We all really are just rats in the Facebook maze
June 28, 2014 7:05 AM
Facebook scientists, having apparently become bored with optimizing advertising algorithms, are now running social science experiments on the users.
Link to the actual paper.
I assume they are already selling this to the advertisers as a way to alter "brand perceptions."
The fact that the science is interesting does not negate the fact that Facebook the corporate entity very well may use the knowledge for less than altruistic purposes.
posted by COD at 7:22 AM on June 28, 2014 [20 favorites]
The subjects did not willingly sign up to be research subjects in this emotional manipulation experiment; you can argue all day long that it was in the T&C, that they are respected researchers, etc., but if you asked the Facebook users how they feel about it (without manipulating their data feeds, of course), they would probably tell you they're pretty upset.
And of course Facebook is going to sell this as a new advertising model; it's what they do.
posted by Old'n'Busted at 7:22 AM on June 28, 2014 [81 favorites]
The research could certainly have been creepier, but is it really accepted as ethical in academic circles that permission to experiment on someone's emotional state by filtering their perceptions of their friends could be gained via a clause slipped into a web site's terms of use?
posted by XMLicious at 7:25 AM on June 28, 2014 [94 favorites]
I don't remember volunteering to participate in this. If I had tried this in graduate school, the institutional review board would have crapped bricks.
From the article:
"In order to sign up for Facebook, users must click a box saying they agree to the Facebook Data Use Policy, giving the company the right to access and use the information posted on the site. The policy lists a variety of potential uses for your data, most of them related to advertising, but there’s also a bit about “internal operations, including troubleshooting, data analysis, testing, research and service improvement.” In the study, the authors point out that they stayed within the data policy’s liberal constraints by using machine analysis to pick out positive and negative posts, meaning no user data containing personal information was actually viewed by human researchers. And there was no need to ask study “participants” for consent, as they’d already given it by agreeing to Facebook’s terms of service in the first place."
posted by mecran01 at 7:25 AM on June 28, 2014 [17 favorites]
I am shocked. SHOCKED.
Does anyone actually pretend Facebook does anything for altruistic purposes?
Standard reminder: if something is free to you, you are not the customer. You are the product.
posted by dry white toast at 7:26 AM on June 28, 2014 [11 favorites]
Sigmund Freud and Edward Bernays explored essentially the same questions. One was a respected researcher and the other was an evil hack.
posted by localroger at 7:28 AM on June 28, 2014 [4 favorites]
The phrasing in the post is pejorative, but putting that aside, I think it's a fair question about whether this research was ethical.
This is human research, on a large scale, with no informed consent (EULA notwithstanding) and no oversight board. And while I'm generally against IRB overreach, I think there should have been one here. The authors freely admit that they were manipulating facebook users' emotional states.
This is also part of a broader issue: Silicon Valley's creating a product to attract users and then using the information gleaned from that use in ways that the users aren't aware of (and often would object to if they were aware of.) Including to manipulate their behavior in ways inimical to their self-interest.
Not gonna plug my book again, but, man, this is basically the subject of the penultimate chapter.
posted by cgs06 at 7:29 AM on June 28, 2014 [33 favorites]
Despite all my rage I am still just a stat on a page
posted by oulipian at 7:30 AM on June 28, 2014 [144 favorites]
I have no idea if these particular social scientists are associated with the Dark Forces of Control (à la Bernays), but there's nothing intrinsically evil about using social media to study group behavior. The social scientists I know are quite jazzed about devising studies using such venues, and for good reason, it seems to me.
posted by mondo dentro at 7:32 AM on June 28, 2014
localroger: You could make the case that either one was the respected researcher and either one was the evil hack. Which is which?
posted by curuinor at 7:32 AM on June 28, 2014 [1 favorite]
Ethics aside, this seems rather obvious. If your newsfeed is filled with negative posts, of course you're going to be influenced by that. In fact, I have unfollowed quite a few people because they're such Debbie Downers.
posted by monospace at 7:33 AM on June 28, 2014
I'm persuaded that Facebook will always remain true to its founding precept: "They trust me -- dumb fucks."
The boy, as they say, is father to the man.
posted by George_Spiggott at 7:37 AM on June 28, 2014 [30 favorites]
Mondo dentro: Was it to study group behavior or group manipulation? Because it sure seems like they were doing the latter.
posted by Old'n'Busted at 7:37 AM on June 28, 2014 [7 favorites]
These experiments might have triggered depressive episodes or psychosis in users suffering mental illness. If there is not a massive class action lawsuit against Facebook on behalf of its mentally ill users, it will be a shame. Personally, I think there should be congressional investigations into the extent of deliberate psychological manipulation in social media online if this is true. This is uncharted, very disturbing territory.
posted by saulgoodman at 7:39 AM on June 28, 2014 [32 favorites]
They may be respected academic researchers, but they owe their subjects an explanation on ethics and their interpretation of informed consent. Lifting a link off of waxpancake, the Chickens in Envelopes blog has a great writeup of this issue: Facebook and their Psychology Experiment.
I do find this kind of research fascinating though. Jeffrey Lin (aka RiotLyte) has done a bunch of work on the contagion of negativity in the game League of Legends; I wrote a summary of some of it last year on my blog. Long story short, they categorize chat logs as positive or negative ("toxic"), then track the trends in the population. One key finding is that if a normally friendly person is in a game with a bunch of flaming assholes, that person is more likely to be a flaming asshole himself in the next game. I.e., negative behavior is contagious. Since learning about their work I've observed that in myself, and used that awareness to try not to carry negative bullshit with me.
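Roughly, the categorization step is just word-list counting, and the contagion question is whether exposure in one game predicts negativity in the next. Here's a toy sketch of that idea (the word lists and game records are invented for illustration; this is not Riot's or Facebook's actual pipeline):

```python
# Toy version of the two steps described above: tag chat as positive or
# negative with word lists, then ask whether players exposed to negativity
# in their previous game are more often negative in the next one.
# Word lists and game records are invented; not Riot's actual pipeline.
from collections import Counter

POSITIVE = {"gg", "wp", "nice", "thanks", "glhf"}
NEGATIVE = {"noob", "report", "trash", "uninstall", "idiot"}

def is_negative(message: str) -> bool:
    words = message.lower().split()
    return sum(w in NEGATIVE for w in words) > sum(w in POSITIVE for w in words)

# (player, chat this game, was a teammate toxic in their previous game?)
games = [
    ("alice", ["glhf", "nice gank"], False),
    ("alice", ["report mid", "trash team"], True),
    ("bob", ["gg wp"], False),
    ("bob", ["uninstall noob"], True),
]

counts = Counter()
for player, chat, exposed_last_game in games:
    counts[(exposed_last_game, any(is_negative(m) for m in chat))] += 1

for exposed in (False, True):
    total = counts[(exposed, True)] + counts[(exposed, False)]
    rate = counts[(exposed, True)] / total if total else 0.0
    print(f"toxic teammate last game={exposed}: negative this game {rate:.0%} of the time")
```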
Riot has hard data and an academic treatment of it going back a few years now, and without running ethically dubious experiments. It's good work.
posted by Nelson at 7:40 AM on June 28, 2014 [23 favorites]
In fact, I unfollowed quite a few people on my friends list because they're such Debbie Downers.
Well I guess we know who was randomly selected to only receive negative messages.
posted by XMLicious at 7:40 AM on June 28, 2014 [17 favorites]
The advertising algorithm is basically doing the same thing - trying to alter our mental state with regards to certain products. Now they've shown (probably obvious in retrospect ) that they can do it by manipulating what we see, without injecting ads into our stream.
So what happens if the Republicans (or Democrats, pick your poison) throw silly money at FB in an exclusive contract to push us left or right in our voting tendencies?
posted by COD at 7:41 AM on June 28, 2014 [7 favorites]
is really a slap in the face to respected academic researchers.
Your friends are running experiments that include reducing the amount of positive expressions I see when I visit Facebook, in order to confirm their theory that it makes me less happy. If they were here, I'd slap them in the face myself.
posted by effbot at 7:41 AM on June 28, 2014 [116 favorites]
Now I have to weigh the cost of friends that I probably won't stay in touch with outside of Facebook vs. being the unwitting emotional plaything of Facebook's legions of social scientists.
posted by mecran01 at 7:46 AM on June 28, 2014 [15 favorites]
What a great day! The sun is shining, the sky is blue — it's just perfect out. When I woke up, I really honestly could not have been happier with the way things were shaping up today. And then I learned about this fun and exciting new study that was published. Well I for one am positively tickled! People being happy can make other people happy! That's swell! I want you all to know that I love everyone and hope you have a fabulous weekend. Be kind, loving, and forgiving to yourselves and others. The world is a beautiful place, and we are privileged to live here.
posted by compartment at 7:50 AM on June 28, 2014 [32 favorites]
Mecran01 nailed it... There are people that I ONLY know through Facebook, people I've never met in person who, over years of contact, have become good friends... This, in addition to some other folks who, for various reasons, I would lose touch with if I didn't check facebook once in a while... but... the bad taste in my mouth about this "research", along with all the other crap Facebook does, makes it real tempting to cut that particular digital cord.
posted by HuronBob at 7:51 AM on June 28, 2014 [5 favorites]
I guess this answers the question "Why won't Facebook just show me all of my friends' posts?"
posted by hydropsyche at 7:51 AM on June 28, 2014 [69 favorites]
localroger: You could make the case that either one was the respected researcher and either one was the evil hack. Which is which?
Unless I missed the part where Freud took money and power for offering his services to the purveyors of war and cigarettes, I'd think that a pretty obvious answer.
posted by localroger at 7:51 AM on June 28, 2014 [2 favorites]
Sounds like the CITI program has a new example for their "History and Ethics of Human Subjects Research" course. Anyone who has gone through the human subjects research training should be able to recognize that at the very least they're in a really unpleasant gray area, or simply not conducting ethical research - manipulation of emotional state without alerting the user they're taking part in a study?
...and from the paper their NLP is weak. One negative word? What's the context of the word use? No semantic analysis? With even some basic semantic analysis (which they should have at Facebook - if they're not using those tools to identify the correct ads to serve up, well, shame on them*?) they could have gotten much more accurate results regarding the author's actual emotional state and their intent. Maybe they hoped throwing a lot of data at the problem would hide the noise? Hard to say without looking at the underlying data.
*actually, given the last set of ads I saw on Facebook, no, they don't seem to be doing that - no context aware ads at all. Maybe that's just a result of the lack of content in my feed...
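To make the "one negative word" complaint concrete: a pure word-count tagger has no notion of negation or context. Here's a toy version of the at-least-one-emotion-word criterion the paper describes (the word lists below are mine, not the actual LIWC dictionaries the study used):

```python
# Toy word-count tagger. No negation or context handling, which is exactly
# the weakness complained about above. Word lists are invented; the real
# study used much larger dictionaries.
POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"sad", "pain", "terrible", "hate"}

def naive_flags(post: str) -> dict:
    words = [w.strip(".,!?") for w in post.lower().split()]
    return {
        "has_positive_word": any(w in POSITIVE for w in words),
        "has_negative_word": any(w in NEGATIVE for w in words),
    }

print(naive_flags("I am not having a great day"))        # flagged positive
print(naive_flags("finally out of pain after surgery"))  # flagged negative
```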
posted by combinatorial explosion at 7:52 AM on June 28, 2014 [7 favorites]
I find it amusing that the AV Club article has 3.4k shares on Twitter and zero on Facebook.
posted by Elementary Penguin at 7:52 AM on June 28, 2014 [5 favorites]
My entire experience with Facebook:
About four years ago I created a Facebook account because, hey, it's what everyone does now that discussion sites are so 2003. So I logged in, looked around a bit, shrugged, logged out, and went about my business.
Later that day I got an email from Facebook saying "We noticed you logged in and didn't do anything!" followed by some cool suggestions.
I went back to one of my discussion board homes and said WTF, Facebook makes a record of the fact that I logged in without doing anything? And everyone said O YEA they record every keystroke, everything you read, it's all used to build your profile so they can tailor your feed.
I decided then and there this was not a service that I needed and have never logged in again, although they still email me updates about the few people I friended when I first created the account.
posted by localroger at 7:55 AM on June 28, 2014 [1 favorite]
That's it, I'm out, done with Facebook. If there's a line, this is it. This has to be it, right?
posted by sendai sleep master at 7:56 AM on June 28, 2014 [2 favorites]
The bummer about the ethics problem is I think generally the Facebook Data scientists did the right thing here. I mean they did a solid experiment and published the results in a public access article. And the results are interesting! Compared to the usual corporate incentive to do everything secret and never contribute any knowledge back to the public sphere, this is a lovely writeup. It's just that their experiment has an ethical flaw, one that confirms users' general view that Facebook treats them with contempt. But I'd sure hate for Facebook's reaction to be "you know, we should stop publishing papers and just keep this data internally for competitive purposes, avoid the hassle."
posted by Nelson at 7:56 AM on June 28, 2014 [8 favorites]
Weird. I shared it and my wife did too.
posted by saulgoodman at 7:56 AM on June 28, 2014 [2 favorites]
Actually, my real considered opinion would be that both were hacks, and only Bernays was evil. You need to do science in order to become a scientist, right? Can't exactly say that Freud improved the state of the science, because he just did casework. Call him the great psychoanalyst.
posted by curuinor at 7:57 AM on June 28, 2014
monospace: "Ethics aside, this seems rather obvious. If your newsfeed is filled with negative posts, of course you're going to be influenced by that."
The question isn't whether you're going to be influenced, but in what way. I think there are a number of competing predictions to social contagion in online spaces that make this a non-obvious result. For one, you might predict that users would ignore posts that reflect emotions they're not currently experiencing (or don't "want" to experience), or you might predict that people would, like you, unfollow others "because they're such Debbie Downers." In which case, the prediction would be that posting habits wouldn't change.
posted by tybeet at 7:58 AM on June 28, 2014 [1 favorite]
my real considered opinion would be that both were hacks, and only Bernays was evil.
I'm cool with that assessment.
posted by localroger at 7:58 AM on June 28, 2014
Unless I missed the part where Freud took money and power for offering his services to the purveyors of war and cigarettes, I'd think that a pretty obvious answer.
Have some cocaine, it will help you wean yourself off of the high you get from name-dropping.
posted by XMLicious at 7:59 AM on June 28, 2014
I went back to one of my discussion board homes and said WTF, Facebook makes a record of the fact that I logged in without doing anything?
They also track you when you're visiting other sites -- pretty much wherever you see a Facebook "like" option, that's served from a Facebook server, along with cookie tracking, etc.
I don't know for a fact that they continue to track you when you're "logged out", but I can't imagine why they wouldn't. That's useful information to them and they certainly haven't promised not to. "Logged in" and "logged out" are not terms that have a hard technical meaning; they're conventions that they present to you in a way that you expect. But for the things they do behind the scenes, they don't have to respect your supposed logged-in/out status at all.
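The mechanism is mundane, by the way: the button is served from facebook.com, so the browser attaches whatever facebook.com cookies it holds, plus the page you were reading. A toy model of what the widget's server gets to log (illustration only, not Facebook's actual code, and whether a given cookie survives "logging out" is exactly the part you're trusting them on):

```python
# Simplified model of third-party widget tracking: a "Like" button embedded
# on an unrelated page is fetched from facebook.com, so the request carries
# the facebook.com cookies your browser holds plus the referring page's URL.
cookie_jar = {"facebook.com": {"tracker_id": "abc123"}}  # set on some earlier visit

def fetch_embedded_widget(widget_domain: str, referring_page: str) -> dict:
    """What the widget's server can log for this page view."""
    return {
        "cookies": cookie_jar.get(widget_domain, {}),
        "referer": referring_page,
    }

# Reading a news article that happens to embed a Like button:
print(fetch_embedded_widget("facebook.com", "https://example-news.test/some-story"))
# -> {'cookies': {'tracker_id': 'abc123'}, 'referer': 'https://example-news.test/some-story'}
```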
posted by George_Spiggott at 8:01 AM on June 28, 2014 [2 favorites]
I find it amusing that the AV Club article has 3.4k shares on Twitter and zero on Facebook.
Funny, that. It was in a post at the top of my Facebook feed when I clicked back over here to see how this thread was progressing, and now, it's... not. Scrolled down into yesterday. Not visible. I imagine if I hunt down the friend who posted it, it'd still be on his page, but good lord with the "engineering."
Dammit. I have to maintain a FB presence because so many of the caving orgs I belong to have moved to it & disseminate information via groups there. I'm moving from ambivalent back towards fuck these bastards again.
posted by Devils Rancher at 8:02 AM on June 28, 2014 [2 favorites]
I'm quite interested in web-based research myself, but am only at the stage of kicking around ideas with friends. I'm really curious about how my Big State U's IRB would handle such experiments.
Based on a perhaps too quick scan of the article, I can't find any indication that standard human subjects ethical criteria (e.g. Declaration of Helsinki) were even in play. This is a problem.
posted by mondo dentro at 8:02 AM on June 28, 2014 [2 favorites]
Have some cocaine
If we had spent half the effort to get rid of war and cigarettes that we've spent to get rid of cocaine, and passed out free cocaine with the savings, there would be a lot of people alive today who are dead instead.
posted by localroger at 8:02 AM on June 28, 2014 [8 favorites]
People seem to be up in arms about the ethics of this experiment. Can someone explain why the use of data in this case is any more problematic than Facebook using data to tweak its advertising engine? Is it the part where emotions are manipulated? Because that's extremely common in both psychological experiments and marketing.
The best I can come up with is that people feel they have an implicit social contract with Facebook to receive all updates from their friends, and have their shares distributed without filtering or adulteration, and that this contract is being violated.
posted by tybeet at 8:03 AM on June 28, 2014
If you work at a modern research hospital or university you could not do Milgram's experiment or Zimbardo's experiment today because an ethics committee has to look at your experiment beforehand and sign their name and reputation to the harmlessness of your experiment. I'm pretty sure that facebook puts profits before harmlessness.
So the million-dollar question is: will the next great paradigm-shaking psych research in the Milgram/Zimbardo vein come from Facebook or its cohorts?
posted by bukvich at 8:04 AM on June 28, 2014 [2 favorites]
Can someone explain why the use of data in this case is any more problematic than Facebook using data to tweak its advertising engine?
We, sadly, don't expect corporations to be ethical. They can lie through their teeth 24/7 trying to get us to buy stuff. Academic researchers have a much stricter standard they have to conform to. Thank the Goddess.
posted by mondo dentro at 8:06 AM on June 28, 2014 [10 favorites]
It's astonishing that Facebook will spend millions on advertising and PR, and then pull a bone-headed move like this. While the research topic is interesting, the way in which they conducted it is breathtakingly unethical.
Many people trust that Facebook's (and indeed, Google's) filtering of news feeds and search results is done in a way that isn't trying to manipulate them. Clearly this is a foolish assumption. The sooner that people understand these companies are not ultimately working on their users' behalf (except where it coincides with their own), the better - no matter what fluffy adverts and PR they produce.
Of course, there's always a large degree of self-interest in what companies do - god knows that Fox News manipulates people's emotions - but the level of control that Facebook and Google have over our perception of reality is large and continuously increasing with every new app and wearable device.
I dearly hope that someone develops filtering technology and agents that will act on their users' behalf rather than advertisers'.
posted by adrianhon at 8:06 AM on June 28, 2014 [1 favorite]
Is it the part where emotions are manipulated?
Ding!
Because that's extremely common in both psychological experiments and marketing.
Except in most cases you give informed consent, e.g. it's not buried in the middle of a 20 page TOS that nobody ever reads and you have to click through to get some service you've been told you need in order to keep up with all your friends. Everyone who participated in, for example, the Stanford Prison Experiment at least knew they were experimental test subjects.
posted by localroger at 8:06 AM on June 28, 2014 [26 favorites]
If you don't like what facebook is doing, don't use the service. But for it to exist, it must sell information about its users to advertisers and allow them to sell ads based on that. Some persons are willing to make that trade.
Even this website sells ads.
posted by Ironmouth at 8:06 AM on June 28, 2014 [1 favorite]
mondo dentro: "We, sadly, don't expect corporations to be ethical. They can lie trough their teeth 24/7 trying to get us to buy stuff. Academic researchers have a much stricter standard."
How is that standard not being met in this case? Deception is often used in psychological research.
posted by tybeet at 8:07 AM on June 28, 2014 [1 favorite]
How is that standard not being met in this case? Deception is often used in psychological research.
Facebook isn't affiliated with an academic institution and its marketing team is not bound by IRB rules. Subjects in psych research conducted by those affiliated with universities and colleges *must* be informed that they are part of an experiment; the deception cannot take place at that point. Is that clearer?
posted by rtha at 8:09 AM on June 28, 2014 [19 favorites]
People keep saying "well, duh, if you're not paying for it you deserve this." However, is there even a broadly used social networking site that you can pay for? I mean, seriously, I think it's a useful device; the question is how we get one in place that doesn't experiment on and exploit those using it. I personally don't think people deserve that even if they failed to read the fine print well. That is NOT informed consent. And yes, I think it's worth creating protections for people with low IQ, poor reasoning, or other difficulties understanding and acting in their own interest.
posted by xarnop at 8:11 AM on June 28, 2014 [5 favorites]
In my experience of experimental psychology studies at universities, while research subjects usually aren't told exactly what the researchers are looking for, they still:
1) Explicitly sign up to the experiment
2) Receive compensation (often monetary)
3) Have the purpose of the experiment explained to them afterwards
The experiment will also go past an ethics review committee if it matches certain criteria.
Facebook's TOS means that they don't need to do this. Now, even if you wish Facebook the very best, this is not a good move to make in the long term. It erodes trust, not just in Facebook but in every large social network. It contributes to FUD and conspiracy theories.
posted by adrianhon at 8:12 AM on June 28, 2014 [11 favorites]
Another way to look at it is that, once you've agreed to be an experimental test subject, THEN the lab guys can lie to you about what the experiment is, e.g. Milgram wasn't actually testing peoples' ability to learn when punished by electric shocks. But you do at least know you've signed up for a mindfucking and that's what the twenty dollars is for.
posted by localroger at 8:12 AM on June 28, 2014 [8 favorites]
I have just changed my mind about FaceBook.
No, I'd rather they didn't succeed in helping poor Africans get online. I'll take the loon balloon over this shit any day.
It's The Hidden Persuaders all over again.
posted by infini at 8:12 AM on June 28, 2014 [3 favorites]
How is that standard not being met in this case? Deception is often used in psychological research.
Read through the criteria listed in the Declaration of Helsinki (I posted the link above). Was a Review Board able to assess issues in the proposed work such as informed consent and potential for harm? I don't see that that happened.
Honestly, though, setting aside my intrinsic dislike of corporate bullshit, I think this is a gray area that academic researchers need to deal with pronto. Because there's a lot more work like this in the pipeline.
posted by mondo dentro at 8:12 AM on June 28, 2014 [2 favorites]
An IRB would have ritually burned this research proposal to cinders, I'd like to believe.
Yes, I know that in the Facebook ToS there's a section that talks about "can [use data] for research." I'm even fine with actual gathering and using data for research, even if for "research" read "marketing research." They have to pay the bills somehow and all that jazz. But manipulating what shows in the content the users have signed up to access, that is to say content produced by their friends/family, and treating what the users are allowed to know about the emotional state of their friends/family as a factor variable to control for the users' own emotional state sort of stretches "use data." Call it spirit of the law vs. letter of the law, but still.
And k8t: I do not want to slap anyone in the face. And I completely understand the attraction of the massive data available and all the excitement the new research possibilities generate. But seriously, if I were on Facebook and all my 65-year-old worry-prone mother two continents away saw from me were my morose posts... Your friends could have thought this one through a little more.
(...so should have PNAS, what the... what?!)
posted by seyirci at 8:13 AM on June 28, 2014 [13 favorites]
I've been concerned about the ways in which entities like Facebook might tweak our news feeds, and recently wrote a piece suggesting they should be encouraged to become information fiduciaries, to avoid not only this kind of experiment but all sorts of things they might try with the benefit of the experiment's results.
But I'm concerned about unfocused blowback here. The IRB rules generally come about in the U.S. through universities that accept Federal funding. So the lesson to Facebook would be not to work with university-based academics on such studies -- or not to publish the studies at all. But they'll still perform the studies. That would be a bad outcome: the experiments will still take place, as they do all the time when companies with our data run their own A/B tests with it and with us. They just won't be shared, or vetted rigorously through academic processes. That's the worst of both worlds.
posted by zittrain at 8:14 AM on June 28, 2014 [6 favorites]
If you don't like what facebook is doing, don't use the service.
Tempting idea!
posted by Devils Rancher at 8:18 AM on June 28, 2014 [2 favorites]
I know two of those authors quite well.
Please let them know that including the single word "research" in a wall-of-text that nobody actually reads in the terms of use of a website that ostensibly has nothing at all to do with volunteering to be a test subject does not qualify as informed consent. Thank you!
posted by Flunkie at 8:19 AM on June 28, 2014 [71 favorites]
How is that standard not being met in this case? Deception is often used in psychological research.
The key phrase is "informed consent." Here's a definition from the Belmont Report, one of the key documents that sets ethical standards for biomedical and behavioral research.
"... there is widespread agreement that the consent process can be analyzed as containing three elements: information, comprehension and voluntariness.
Information. Most codes of research establish specific items for disclosure intended to assure that subjects are given sufficient information. These items generally include: the research procedure,[nope] their purposes,[nope] risks[nope] and anticipated benefits,[nope] alternative procedures (where therapy is involved),[N/A] and a statement offering the subject the opportunity to ask questions and to withdraw at any time from the research.[nope] Additional items have been proposed, including how subjects are selected, the person responsible for the research, etc....
"Comprehension. The manner and context in which information is conveyed is as important as the information itself. For example, presenting information in a disorganized and rapid fashion, allowing too little time for consideration or curtailing opportunities for questioning,[hmmm....] all may adversely affect a subject's ability to make an informed choice....
"Voluntariness. An agreement to participate in research constitutes a valid consent only if voluntarily given. This element of informed consent requires conditions free of coercion and undue influence. Coercion occurs when an overt threat of harm is intentionally presented by one person to another in order to obtain compliance. Undue influence, by contrast, occurs through an offer of an excessive, unwarranted, inappropriate or improper reward or other overture in order to obtain compliance. Also, inducements that would ordinarily be acceptable may become undue influences if the subject is especially vulnerable."
The only element which is even theoretically satisfied is voluntariness. One could argue that use of Facebook implies voluntariness, which is true to some extent, but I'd argue that it's not fully so, as the only way to avoid it is to stop using a service that many users are now unwilling to leave. There needs to be a less difficult option to opt out if it's to be truly voluntary.
Even in the case where there's deception, *most* of these conditions are met. Here, none are.
posted by cgs06 at 8:19 AM on June 28, 2014 [25 favorites]
Why is this work interesting despite the common-sense conclusion? The result is "obvious" in the sense that the top-line result is what you might expect, but the details, the magnitude of the effect, and the statistical confirmation are valuable.
As for interesting, this kind of understanding of emotional contagion gives product designers a tool to influence their community. In League of Legends, for instance, one conclusion from their research has been they need to stop toxic chat behavior immediately, ideally in the very game it's happening, so that they can try to arrest the contagion. They've gotten much more aggressive with chat bans recently, and I think that's not a coincidence. Here on Metafilter our moderators do a great job of removing derailing, unhelpful, snide comments quickly in order to try to keep the discourse at a more intelligent and meaningful level. I think it works. This Facebook paper explains why that kind of intervention works, and to what extent, with hard data.
It's good research. I just wish they'd done this without the ethically troubling direct intervention. It would have only been a bit harder to find subpopulations of Facebook users whose feeds happened to skew negative or positive and just study them without manipulation. (Also, let's be honest, as corporate actions go this was pretty mild. It's not like Facebook started publishing your purchase history without your consent or anything.)
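Concretely, the observational version I mean is just: find users whose feeds already skewed negative and compare their subsequent posting to a baseline group. A back-of-the-envelope sketch (all numbers invented; this is not the paper's analysis, just the shape of it):

```python
# Observational alternative suggested above: compare the positive-posting
# rate of users whose feeds happened to skew negative against everyone else.
# All numbers are invented for illustration.
users = [
    # (fraction of feed that was negative, fraction of own posts that were positive)
    (0.10, 0.48), (0.12, 0.47), (0.11, 0.46),
    (0.55, 0.41), (0.60, 0.39),
]

def mean(xs):
    return sum(xs) / len(xs)

skewed = [own for feed, own in users if feed >= 0.5]
baseline = [own for feed, own in users if feed < 0.5]

print(f"baseline positive-posting rate:        {mean(baseline):.3f}")
print(f"negative-skewed-feed positive rate:    {mean(skewed):.3f}")
print(f"difference (the 'contagion' estimate): {mean(baseline) - mean(skewed):.3f}")
```

The obvious catch is confounding - people whose feeds skew negative may simply have gloomier friends or gloomier lives to begin with - which is presumably why the authors randomized instead of just observing.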
posted by Nelson at 8:19 AM on June 28, 2014 [3 favorites]
Can someone explain why the use of data in this case is any more problematic than Facebook using data to tweak its advertising engine?
It isn't. It also is far less problematic than the various shows and films that use deception to manipulate people into behaving in some amusing or embarrassing way for the entertainment of others and so the producers can gain income.
The proper comparison isn't to marketing or Borat, though. The proper comparison is to other academic research, which generally requires clear informed consent and approval by one or more IRBs. Yes, it's sort of weird that you can do things to people for shits and giggles that you couldn't do for notionally important scientific research, but that's the world we live in.
It might be that this was approved by the IRB at SFSU or Cornell or both. PNAS says that research with human subjects has to be approved by "the author's institutional review board," but they don't say who the relevant author is on multi-author papers. I've seen more troubling stuff that AFAIK made it through IRB, especially with field experiments. On the other hand, it might be that they only require the approval of Facebook's nonexistent IRB; Lord knows treating the EULA as informed consent doesn't inspire confidence.
posted by ROU_Xenophobe at 8:20 AM on June 28, 2014 [5 favorites]
The big web companies (Yahoo, Microsoft, Google, probably Amazon, probably Facebook) have been doing large-scale experimental work on their users for years, mostly to do with advertising and branding under the heading of "computational advertising". The pieces of this work that I've seen have been much creepier in certain ways than this particular Facebook thing.
posted by advil at 8:21 AM on June 28, 2014 [3 favorites]
The danger of Facebook isn't to you, it is to everyone you know.
By posting everything your kids do, from the moment they are born until they are old enough to create their own Facebook accounts (13), every significant emotional occurrence in their histories, well, "the ad man knows you better than you do, my son."
There are products and drugs to 'fix' everything that has ever gone wrong in your life, and there are products and drugs to embiggen everything that has ever gone right in your life.
Facebook is the thought police and soma alike.
posted by Monkey0nCrack at 8:22 AM on June 28, 2014 [5 favorites]
Any social scientists here? I'm not one, so I'm not familiar with how IRBs work for such things. I do human subjects experiments in a lab, so the setup is much more obvious. How does informed consent work if data is being collected by people with clipboards at, say, a mall? Is there some threshold where the level of participation transforms to mere poll-like data collection that obviates the informed consent requirement?
Regardless, the notion that some small phrase in a TOS covers informed consent is an absurd distortion. When corporate lawyers run research ethics reviews, we get... Big Tobacco, Big Pharma and Big Ag! (Hey, wait a minute...)
posted by mondo dentro at 8:23 AM on June 28, 2014
People seem to be up in arms about the ethics of this experiment. Can someone explain why the use of data in this case is any more problematic than Facebook using data to tweak its advertising engine?
Why do people accept advertising? Because ultimately they view knowledge of the availability of products and services as harmless. It's ultimately knowledge.
Why don't people tend to accept concerted efforts to manipulate them, like this? Because the researchers methodically withheld some knowledge from users in favor of others.
The reason I might not have seen a friend's birth notice in their feed is because a researcher decided that, if they withheld it as a part of their fucking experiment, it might depress me. That could well have been a positive outcome to them, because it was the aim of my part of their experiment. For this, they can go to hell.
Furthermore, the experimenters relied on one of Facebook's most hated aspects - that the site is inscrutable and mysterious about what it shows users - and took advantage of it for their experiment. In other words, Facebook just doesn't care about actively improving their product; in fact, by leaving it broken they can use it to perform experiments on their users. Fun?
posted by JHarris at 8:23 AM on June 28, 2014 [43 favorites]
zittrain: I'm worried about that too. I'm also fascinated that this kind of complaint didn't exist for the voter participation study they did in 2010.
As you noted in your information fiduciary post, Facebook has been conducting this kind of experiment for years now. Here's the guide they've just published to Designing and Deploying Online Field Experiments, along with a general toolkit for running these experiments. It should also be noted that the UK government's randomized trials coming from the Behavioral Insight Unit also didn't include consent.
Perhaps what makes this emotions-oriented paper different is that previous research on emotions and social media, like this project by Microsoft focused on detecting post-partum depression, tended to follow more rigorous standards for consent, and to focus on natural experiments -- looking retrospectively at people's data rather than modifying their Facebook experience?
posted by honest knave at 8:24 AM on June 28, 2014 [3 favorites]
So, mefites, my smart computer literate mefites..... who is going to make a better internet? I sort of have this dream of there being a metafilter internet search engine.... a mefitebook.... facefilter....... I guess the whole issue is making the whole thing for cost?
The thing that drives people to use these services though is the connection with EVERYONE and when it's a monthly subscription you don't get EVERYONE as easily, but maybe there are ways to work around that? To have some fees other than advertising but make some of the accessibility free? I'm just saying... mefites... think about it....
posted by xarnop at 8:28 AM on June 28, 2014 [1 favorite]
In a roundabout way, this all reminds me of one of my all-time favorite Metafilter comments.
posted by Flunkie at 8:29 AM on June 28, 2014 [4 favorites]
Can someone explain why the use of data in this case is any more problematic than Facebook using data to tweak its advertising engine?
You have the same social relation to advertisers as you do to your friends? That's a bit unusual.
posted by effbot at 8:29 AM on June 28, 2014 [4 favorites]
I get that I'm supposed to be horrified and think it's unethical, but my cynicism about Facebook is such that I've stopped being surprised about any manipulative crappy thing it does.
Also, I primarily use FB on the laptop and my bookmark there is to the (otherwise deprecated) chronological feed. If you use FB mostly through your phone or tablet, you have to click through every time, but the chronological feed seems to minimize the messing around. You'll still get effectively pinned posts when all your friends comment on them, but I know I see posts on my laptop that I miss on my phone.
posted by immlass at 8:29 AM on June 28, 2014 [1 favorite]
Thanks for those links, honest knave. I think you're right that it's the tweaking of the feeds freaking people out -- but again, those feeds are tweaked left, right, and center. I think we should be worried about that -- but if we focus anger on the publishing of the research, the result will be secret tweaking rather than no tweaking.
posted by zittrain at 8:31 AM on June 28, 2014 [1 favorite]
Or is it reasonable to see social networks and search engines and the like as a public service, and hold providers accountable for standards of ethics? If holding them to ethical standards means they can't provide the same service for free, maybe they should do the ethical thing and go ahead and charge for the service instead of pretending it's free and secretly charging people by experimenting on them without their knowledge?
If we think these services should be free accessible services to the public maybe government funds should be involved?
posted by xarnop at 8:32 AM on June 28, 2014 [1 favorite]
(a whole other can of worms whether the government would be more trustworthy with all the data)
posted by xarnop at 8:32 AM on June 28, 2014
Why am I not surprised that UCSF is involved in this study? They have a notorious lack of institutional ethics.
I have a close friend in another state who is in the process of dying of cancer. Like, last days and hours. Her family is using social media to update friends and family, which has led to an outpouring of love and grief. I'd hate to think we are included in some Facebro researcher's "study" on emotional manipulation in what is a very vulnerable period for many of us. Facebook has a very real ability to keep us connected in meaningful ways, and they choose to manipulate those connections and then publicize it after the fact. This suggests to me that their culture is terribly flawed, which flows from the top, and their days as dominant social network are going to be quite limited.
posted by Existential Dread at 8:34 AM on June 28, 2014 [20 favorites]
If Facebook had thought that maybe valence of presented posts would affect users' behavior in a particular way, and had just updated their algorithms for everyone, I don't think they'd be getting this kind of anger. I find it odd that doing the same kind of thing but more scientifically gets everyone up in arms.
posted by parudox at 8:34 AM on June 28, 2014 [1 favorite]
I get your point about driving the "research" underground, zittrain, but that's happening anyway. I imagine you'd also not like our Corporate Overlords to use the occasional "official" scientific publication to mask their more widespread hidden activities.
I'm not saying I have a good solution for this...
posted by mondo dentro at 8:34 AM on June 28, 2014
zittrain: I think people are being angered by the research itself, not the fact of its publishing; I'm not seeing people getting irritated by PNAS, for example. But you're absolutely right that given the experiment has already taken place, it's good they've published it. At least it shines a light on Facebook's practices and makes people realise the extent to which filters are being tweaked, and for what (potential) ends.
posted by adrianhon at 8:36 AM on June 28, 2014
Why am I not surprised that UCSF is involved in this study? They have a notorious lack of institutional ethics.
Citation needed.
posted by Nelson at 8:36 AM on June 28, 2014 [4 favorites]
Any social scientists here? I'm not one, so I'm not familiar with how IRBs work for such things. I do human subjects experiments in a lab, so the setup is much more obvious. How does informed consent work if data is being collected by people with clipboards at, say, a mall? Is there some threshold where the level of participation transforms to mere poll-like data collection that obviates the informed consent requirement?
Rule of thumb is generally: Publicly observable behavior, no need for an IRB. So if I want to just sit at a mall and see what clothes people are wearing, or who's interacting with who, no problem.
But if I'm doing anything more than observing- say, going up to somebody with a clipboard and saying "Can I ask you a few questions?" you need to get their consent.
OTOH, if what you're doing is not out of the ordinary for that person, you don't need consent. One of the earliest sociolinguistics experiments involved the researcher going up to salespeople and asking them a question to which the answer was "Fourth floor" (he was interested in those r's there). I think if you were to repeat that experiment today, as long as you weren't tape recording anybody, you would not need consent, as it's assumed that what you're doing is not outside what that person would normally experience.
posted by damayanti at 8:37 AM on June 28, 2014 [3 favorites]
(Also let's be honest, as corporate actions go this was pretty mild. It's not like Facebook started publishing your purchase history without your consent or anything.)
The apprehension this causes for me, and I've never had a Facebook account, doesn't have anything to do with privacy.
Rather it's because it looks like an academically-endorsed toehold into the most worrying way that the companies who assure us we don't need net neutrality, the oppressive governments, and heck, even the pedestrian hackers looking for a thrill or to make a buck, could take advantage of our modern electronically-intermediated interaction with the world: by not simply inserting their messages in with everything else but actually censoring our individual perceptions so that they diverge from other peoples' in the course of manipulating us.
It's like China personally censoring your Google results to individually enforce what you should see.
posted by XMLicious at 8:37 AM on June 28, 2014 [13 favorites]
We're here at Facebook, where we've secretly replaced the upbeat posts you regularly see with nothing. Let's see if anyone can tell the difference!
posted by starman at 8:39 AM on June 28, 2014 [4 favorites]
I suggest folks read the actual study where they discussed what they did. Basically, they didn't manipulate content, they just manipulated how content appeared in the already heavily manipulated newsfeed.
posted by k8t at 8:42 AM on June 28, 2014 [1 favorite]
Is there some threshold where the level of participation transforms to mere poll-like data collection that obviates the informed consent requirement?
Academic polls require IRB approval.
posted by ROU_Xenophobe at 8:43 AM on June 28, 2014 [1 favorite]
I get that I'm supposed to be horrified and think it's unethical,
You're not "supposed to be" anything. This is not an episode of a TV show where the writers are creating fictional scenarios to manipulate you toward a particular emotional state. This is a real thing that happened for reasons having nothing to do with shaping your opinions. Only you, like everybody else has grown so accustomed to subtle emotional and psychological manipulation, it's harder and harder for people to engage with events in reality without seeing them as conscious social performances meant to make a point or lead to a particular (false or otherwise mediated) belief. We're getting so accustomed to all incoming information being filtered and manipulated we can't take honest communication for what it is anymore. We often mistrust the motives of our closest friends and family because everyone's so full of shit now, it's impossible to tell what information's not being manipulated for some hidden purpose.
posted by saulgoodman at 8:43 AM on June 28, 2014 [41 favorites]
You're not "supposed to be" anything. This is not an episode of a TV show where the writers are creating fictional scenarios to manipulate you toward a particular emotional state. This is a real thing that happened for reasons having nothing to do with shaping your opinions. Only you, like everybody else has grown so accustomed to subtle emotional and psychological manipulation, it's harder and harder for people to engage with events in reality without seeing them as conscious social performances meant to make a point or lead to a particular (false or otherwise mediated) belief. We're getting so accustomed to all incoming information being filtered and manipulated we can't take honest communication for what it is anymore. We often mistrust the motives of our closest friends and family because everyone's so full of shit now, it's impossible to tell what information's not being manipulated for some hidden purpose.
posted by saulgoodman at 8:43 AM on June 28, 2014 [41 favorites]
Editing what content you see and its context is still very much manipulating content because meaning is context dependent.
posted by saulgoodman at 8:45 AM on June 28, 2014 [11 favorites]
Reading up, honest knave referred to the UK government's Behavioral Insight Unit (aka the 'nudge' unit).
I recall that at the time there was some debate about the ethics of the nudge unit, e.g. to what extent are we happy about the 'government knows best' attitude of manipulating you for your own good, without your consent or knowledge? Since much of the work proposed or carried out was either for the public good (increasing organ donations, decreasing electricity usage) or for your own basic health (losing weight, stopping smoking) - things that were not too controversial - it didn't blow up much.
The difference with Facebook's study is that the manipulation doesn't appear to be for our own good or for the public good, but for Facebook's (and their advertisers') good. No wonder people are treating this differently.
Once again, you might say that we should expect this of Facebook - but the fact is, most people don't. Maybe Metafilter is an outlier and everyone else will love the idea that Facebook can make them happier, I don't know - but they deserve to know.
posted by adrianhon at 8:46 AM on June 28, 2014 [4 favorites]
I suggest folks read the actual study where they discussed what they did. Basically, they didn't manipulate content, they just manipulated how content appeared in the already heavily manipulated newsfeed.
Who thought they were manipulating content? Their research could have been done as a natural experiment, but they are really only interested in active manipulation of users' news feeds because that can be used to affect the emotional impact of advertising.
posted by stopgap at 8:48 AM on June 28, 2014 [9 favorites]
We're getting so accustomed to all incoming information being filtered and manipulated we can't take honest communication for what it is anymore.
I am the very model of a cynical Gen X-er. /G&S
I stopped believing any commercially-mediated speech was honest a looooong time ago.
posted by immlass at 8:49 AM on June 28, 2014
That experiment described in that chicken and egg post is an entirely different experiment altogether and wouldn't provide a causal link as the study being discussed did.
posted by k8t at 8:52 AM on June 28, 2014
parudox: I think I misunderstood your comment. You're asking, had they done the same manipulation for everyone, would this not be something that raises ethical questions because they wouldn't have obtained meaningful data (no control group)? Then they would just have been curtailing users' access to content they signed up to receive (and are paying for by being ad-receivers alongside) for no reason whatsoever.
k8t: No, really, we understand what the study did and for what and to what effect and how they were setting up and testing hypotheses. We just honestly don't think it was a nice thing to do. We don't think the already-heavy-manipulation is a nice thing either because see the previous paragraph about what the users have signed up to receive.
(And before anyone chimes with "if you don't like what they choose to give, you don't get anything..." You're quite right, I don't, because I'm not on Facebook. I'm not going to bang the drum of my own social networks either; I'm wary of them too.)
posted by seyirci at 8:52 AM on June 28, 2014
Emotional Contagion on Facebook? More Like Bad Research Methods
posted by stevil at 8:57 AM on June 28, 2014 [5 favorites]
The newsfeed is entirely manipulated. It isn't like there is a "true" "authentic" newsfeed out there.
posted by k8t at 8:57 AM on June 28, 2014 [1 favorite]
I stopped believing any commercially-mediated speech was honest a looooong time ago.
Me too. The problem is, now people are starting to behave as if they're becoming conditioned to apply the same level of wariness and cynical suspicion in their personal lives. I'm seeing anecdotal evidence of such effects all over the place among my various social/peer groups.
posted by saulgoodman at 8:58 AM on June 28, 2014 [2 favorites]
I wish I didn't need FB, but it's become the de facto way to do any kind of creative project promotion in addition to being the most convenient way to stay in touch with my relatives in Germany. And everybody's using its authentication APIs these days.
posted by saulgoodman at 9:00 AM on June 28, 2014 [1 favorite]
k8t: I suggest folks read the actual study where they discussed what they did. Basically, they didn't manipulate content, they just manipulated how content appeared in the already heavily manipulated newsfeed.
From the abstract:
In these conditions, when a person loaded their News Feed, posts that contained emotional content of the relevant emotional valence, each emotional post had between a 10% and 90% chance (based on their User ID) of being omitted from their News Feed for that specific viewing.
This is no different from filtering search results to skew perception of a fact or consensus. "It doesn't technically count as manipulating content because we define the term 'content' at a sub-search-result level" and "Google filters search results anyways" don't make it ethical or less manipulative of people.
posted by XMLicious at 9:01 AM on June 28, 2014 [11 favorites]
I suggest folks read the actual study where they discussed what they did. Basically, they didn't manipulate content
The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. ... Two parallel experiments were conducted ... One in which exposure to friends’ positive emotional content in their News Feed was reduced, and one in which exposure to negative emotional content in their News Feed was reduced.
The ethical problem isn't that the feed was manipulated. It's that it was manipulated for 689,000 people specifically to provoke an emotional response. Without their informed consent.
posted by Nelson at 9:01 AM on June 28, 2014 [31 favorites]
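The mechanism those abstract excerpts describe is concrete enough to sketch. What follows is a minimal illustration in Python; it is not Facebook's published code, and both the hash-based probability mapping and the per-post "emotional" flag are assumptions made purely for illustration (the paper says the omission chance was "based on their User ID" without saying how, and the flag stands in for whatever word-list classifier tagged posts as positive or negative).

import hashlib
import random

def omission_probability(user_id: str) -> float:
    # Map a user ID to a stable omission probability in [0.10, 0.90].
    # Hashing the ID is one common way to get a deterministic, evenly
    # spread per-user value; the actual mapping used is not published.
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF  # deterministic value in [0, 1]
    return 0.10 + 0.80 * fraction

def filter_feed(user_id: str, posts: list[dict]) -> list[dict]:
    # Drop posts flagged as emotional, for this viewing only, with the
    # user's fixed probability; everything else passes through untouched.
    # Each post is assumed to look like {"text": "...", "emotional": True}.
    p_omit = omission_probability(user_id)
    return [post for post in posts
            if not post["emotional"] or random.random() >= p_omit]

Even this toy version makes the objection plain: the user never sees which posts were withheld, and nothing in the interface signals that the filtering is happening.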
They may be respected academic researchers, but they owe their subjects an explanation on ethics and their interpretation of informed consent.
Seems to me that these researchers are of the same ilk as Facebook's Founder, Mark Zuckerberg. They are just doing whatever the hell they want to justify their research, and after the deed is done somehow try to spin it as viable and helpful. To whom is it helpful? The researchers and Mark Zuckerberg's business, which is designed for ONE thing only, to serve its advertising customers. George Orwell, author of the prescient novel "1984", is saying "I told you so!".
posted by Vibrissae at 9:01 AM on June 28, 2014 [1 favorite]
What does Hüsker Dü think about all this? This.
posted by PHINC at 9:03 AM on June 28, 2014 [2 favorites]
The newsfeed is entirely manipulated. It isn't like there is a "true" "authentic" newsfeed out there.
Right, which is why they weren't interested in performing a purely observational study like the League of Legends research discussed above. Facebook wants to know how best to manipulate the news feed so you will be more receptive to buying stuff. They are an advertising company. This is research performed in the service of advertising without appropriate ethical review.
posted by stopgap at 9:03 AM on June 28, 2014 [2 favorites]
Citation needed.
Researcher in meningitis lab dies of.....meningitis
Body in SF General stairwell ID'd as missing patient
Even when a hospital researcher reported seeing a woman lying in the stairwell Oct. 4 where Spalding was finally found, the report noted, no one searched the area.
UCSF lapses mean research animals suffer
posted by Existential Dread at 9:07 AM on June 28, 2014 [8 favorites]
Facebook may not require research to go through an IRB, but PNAS allegedly does. From PNAS Editorial policies:
Research involving Human and Animal Participants and Clinical Trials must have been approved by the author's institutional review board. Authors must follow the International Committee of Medical Journal Editors' policy and deposit trial information and design into an accepted clinical trial registry before the onset of patient enrollment. Authors must include in the Methods section a brief statement identifying the institutional and/or licensing committee approving the experiments. For experiments involving human participants, authors must also include a statement confirming that informed consent was obtained from all participants. All experiments must have been conducted according to the principles expressed in the Declaration of Helsinki.
I see no mention of the institutional and licensing committee approving the experiments, as required.
Failure to conform to ethical norms in research is often grounds for retraction.
posted by grouse at 9:09 AM on June 28, 2014 [10 favorites]
Jamie, the second author in this study, started her postdoc at UCSF pretty recently, and this project was in the works long before she started there. The third author, Jeff, was her dissertation advisor at Cornell.
posted by k8t at 9:10 AM on June 28, 2014
The ethical problem isn't that the feed was manipulated. It's that it was manipulated for 689,000 people specifically to provoke an emotional response. Without their informed consent.
Yes, this. This would almost certainly not pass an IRB review by any reputable academic institution. If you want to argue, k8t, that it's okay because it wasn't done by researchers bound by an IRB, I guess you can do that, but I still think what they did is unethical bullshit and you can't just wave "informed consent" away because this was done by people you know and like.
posted by rtha at 9:10 AM on June 28, 2014 [39 favorites]
I was at a talk at Cornell a few years ago by a FB employee describing some social science experiments... ways to increase user engagement by adjusting what showed up on their feeds, IIRC. The speaker seemed a bit confused when someone asked him during the Q & A about their ethics review process.
posted by Estragon at 9:14 AM on June 28, 2014 [13 favorites]
So far in reading about this story for the past hour, everyone who says "I don't see a problem" doesn't seem to understand the concept of informed consent. That's rather scary to me, even more so than this unethical tinkering that facebook has done of the news feeds.
posted by Catblack at 9:18 AM on June 28, 2014 [6 favorites]
Rtha, these aren't friends. They are scholars with whom I travel in nearby circles. (My research is on technology use in authoritarian states, they study deception and social support in social media; but we're in the same discipline, so we end up being in the same sub-discipline business meetings.)
I'm not saying that there aren't some ethical issues here - as is the case with a lot of research that uses computational methods and draws from social media data - but I'm merely trying to add a social scientific voice to this conversation.
If this was all done on Facebook's end, these folks probably didn't need IRB approval. Facebook can do whatever the hell it wants with its property. I don't like that, but it is how it is.
Jamie and Jeff haven't talked about what, if any, IRB they went through. I imagine that this will come out in the coming days.
posted by k8t at 9:19 AM on June 28, 2014 [1 favorite]
If this was all done on Facebook's end, these folks probably didn't need IRB approval. Facebook can do whatever the hell it wants with its property. I don't like that, but it is how it is.
That is, frankly, a weaselly excuse for doing research that no IRB would allow.
posted by Etrigan at 9:21 AM on June 28, 2014 [27 favorites]
Yes, this. This would almost certainly not pass an IRB review by any reputable academic institution.
I'm less sure of that. I've seen a number of recent studies that AFAIK were IRB approved but struck me as ethically troubling. A field experiment where they found that people are more likely to vote if you threaten to tell their neighbors whether or not they voted, for example. Or studies where researchers lie to legislators in order to learn about their response patterns.
posted by ROU_Xenophobe at 9:21 AM on June 28, 2014 [2 favorites]
If this was all done on Facebook's end, these folks probably didn't need IRB approval. Facebook can do whatever the hell it wants with its property. I don't like that, but it is how it is.
OK, cool. So if I have an experiment that I just know will not pass muster with my IRB, all I need to do is run it at some corporate collaborator's site. Then I can use the data as if it fell from the sky into my lap.
Uh, no. I don't think so.
What do you think the ethical issues are with this kind of work? (Given your area of research, I know you can see there are such issues.) Any ideas how researchers can ethically use social media as a resource?
posted by mondo dentro at 9:23 AM on June 28, 2014 [10 favorites]
I'm merely trying to add a social scientific voice to this conversation.
There are other social scientists here, so this statement about what you add to this conversation isn't quite accurate.
posted by grouse at 9:27 AM on June 28, 2014 [13 favorites]
I regularly give talks and write about the ethical dilemmas I face analyzing social media data generated by vulnerable populations (dissidents in authoritarian states); trust me, I think about social media/computational social scientific ethical issues daily.
My own Facebook feed is full of researchers that I respect feeling that this wasn't crossing an ethical line. I'm hoping that one of them is going to write something smart about this.
posted by k8t at 9:27 AM on June 28, 2014 [2 favorites]
k8t: Facebook can do whatever the hell it wants with its property. I don't like that, but it is how it is.
"It is how it is," is a peculiarly fatalistic way of putting it. It suggests that there's nothing to be done about this state of affairs. I disagree. We can expect the world to be different, and we can change laws to require disclosure or consent for experiments like this. Thankfully the EU has shown willingness to get involved in similar matters.
posted by adrianhon at 9:27 AM on June 28, 2014 [7 favorites]
"It is how it is," is a peculiarly fatalistic way of putting it. It suggests that there's nothing to be done about this state of affairs. I disagree. We can expect the world to be different, and we can change laws to require disclosure or consent for experiments like this. Thankfully the EU has shown willingness to get involved in similar matters.
posted by adrianhon at 9:27 AM on June 28, 2014 [7 favorites]
I'm merely trying to add a social scientific voice to this conversation.
As am I. All I can say is that if this crossed my desk as a reviewer, for any journal, I'd recommend rejection on ethical grounds. Even if it had IRB approval.
Mostly because there's no reason at all this study couldn't be done ethically with a smaller N. Properly recruited subjects in some social setting with some false participants, and who are properly debriefed afterwards. You shouldn't need 600,000 subjects to see an effect.
posted by ROU_Xenophobe at 9:29 AM on June 28, 2014 [20 favorites]
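A rough power calculation makes the "smaller N" point concrete. This is a back-of-the-envelope sketch (normal approximation, roughly 80% power, two-sided alpha of 0.05), not a claim about the study's actual design:

from math import ceil

def n_per_group(d: float, z_alpha: float = 1.96, z_power: float = 0.84) -> int:
    # Per-group sample size needed to detect a standardized effect size d
    # with roughly 80% power at a two-sided alpha of 0.05.
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(0.8))   # 25 per group for a large effect
print(n_per_group(0.2))   # 392 per group for a small effect
print(n_per_group(0.02))  # 39,200 per group even for a very small effect

Even for very small effects, the required sample sits well below the 689,003 users actually enrolled.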
My own Facebook feed is full of researchers that I respect feeling that this wasn't crossing an ethical line.
Okay now that's funny.
posted by George_Spiggott at 9:30 AM on June 28, 2014 [56 favorites]
Facebook being able to do whatever they want does not make what they do magically ethical.
posted by rtha at 9:30 AM on June 28, 2014 [12 favorites]
I'm hoping that one of them is going to write something smart about this.
I would have hoped that PhD-level academic researchers doing work of this type would have already had something smart to say about this.
I am not automatically freaked out by this research--at least any more than I am by other research in, say, biotech and pharmacology. But the apparently complete bypassing of an ethical review process in this particular case is alarming. It blows a huge loophole into IRB review, as far as I can tell. Was there a review process? Are there already commonly accepted standards for how to do such things?
posted by mondo dentro at 9:34 AM on June 28, 2014 [2 favorites]
I'm merely trying to add a social scientific voice to this conversation.
You are by no means the only social scientific voice in this conversation.
FWIW I think the IRB process for social media research is far too onerous in many ways and a lot of interesting and ethical research is being done without worrying about a formal ethics review. (Particularly privately, as trade secrets inside companies.) I'm glad the Facebook folks did this experiment and I'm glad they published the results, whether it'd be approved by an academic IRB or not. But they treated me and 689,002 other Facebook users like lab rats, and the contempt of that is offensive and deserves comment.
Jamie and Jeff haven't talked about what, if any, IRB they went through. I imagine that this will come out in the coming days.
I sure hope they do. I'm surprised they or PNAS didn't recognize the question would be asked and head off the controversy with an explanation. Particularly since Facebook has such a reputation for treating its users with contempt.
(I'm reminded of the AOL search log debacle, a case of researchers doing a well-meaning and interesting academic thing that backfired terribly.)
posted by Nelson at 9:35 AM on June 28, 2014 [1 favorite]
One time, a wizard made a machine that the monkeys could push buttons on and their feelings and thoughts and experiences were miracled away to the magic mountain, where the wizard used it all to gather money, knowledge, and power. The monkeys flocked to it; it seemed to make friendship possible again. They also each seemed to sense that the magic machine was somehow degrading. As they continued streaming their digitized lives into the data vaults of the rich and powerful wizards, they sighed.
posted by mbrock at 9:36 AM on June 28, 2014 [6 favorites]
George_Spiggott: Okay now that's funny.
But, wait, what if it's the MeFi mods secretly manipulating our emotions?
Perhaps a secret deal with Google to get back their advertising revenue???
SHAKE UP, WEEPLE.
posted by tonycpsu at 9:38 AM on June 28, 2014 [1 favorite]
Some interesting reading on ethics and "big data"(sorry for paywalls):
http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878#.U67u3LG8ToE
http://pediatrics.aappublications.org/content/131/Supplement_2/S127.short
http://www.ijdc.net/index.php/ijdc/article/view/214
This is specifically about the question of informed consent:
posted by k8t at 9:38 AM on June 28, 2014 [3 favorites]
On the other hand, this makes a dandy excuse to give those special people in your life. "Gee, I'm sure sorry I haven't been liking or commenting on all those articles about Nobama and Hellary Klinton you find, Uncle Ray. Facebook must be messing around with my feed. You know how they are."
posted by Spatch at 9:39 AM on June 28, 2014 [5 favorites]
My own Facebook feed is full of researchers that I respect feeling that this wasn't crossing an ethical line.
How likely do you think it is that your Facebook feed -- or anyone else's -- will include researchers who think it does cross ethical lines?
posted by jamjam at 9:40 AM on June 28, 2014 [7 favorites]
mecran01: “Now I have to weigh the cost of friends that I probably won't stay in touch with outside of Facebook vs. being the unwitting emotional plaything of Facebook's legions of social scientists.”
sendai sleep master: “That's it, I'm out, done with Facebook. If there's a line, this is it. This has to be it, right?”
Devils Rancher: “Dammit. I have to maintain a FB presence because so many of the caving orgs I belong to have moved to it & disseminate information via groups there. I'm moving from ambivalent back towards fuck these bastards again.”
That's where I'm at too. I'm very angry about this, but I have obligations to people that more or less require me to check on Facebook at least daily.
Looks like it's time to either turn Facebook into a regulated monopoly or just nationalize it.
posted by ob1quixote at 9:40 AM on June 28, 2014
k8t, do you have any links about the ethics of academic researchers evading independent ethics review by collecting data from non-academic institutions?
posted by grouse at 9:42 AM on June 28, 2014 [2 favorites]
Maybe Facebook is manipulating your feed's contents to only show researchers who don't feel the study crossed an ethical line, in order to make you happier? ;)
posted by adrianhon at 9:43 AM on June 28, 2014 [9 favorites]
Existential Dread, UCSF folks may work at the VA and the General, but I'm pretty sure those hospitals aren't part of UCSF.
posted by Lyme Drop at 9:44 AM on June 28, 2014
Ironically since sharing a few articles about this on facebook, the facebook page is no longer working for me while the rest of the internet is working fine.
Inteeeereeeestiiing....
posted by xarnop at 9:44 AM on June 28, 2014 [1 favorite]
Nationalizing Facebook sounds like an utterly catastrophic idea (post-Snowden, etc)...
posted by mbrock at 9:46 AM on June 28, 2014 [2 favorites]
How the hell is anybody supposed to trust FB as a platform for doing less sinister business now, knowing they just might be deliberately needling or messing with your audience's emotions in the middle of a PR campaign? I mean, jeez, this stuff is getting way too complicated to understand and appreciate the potential unintended consequences. It's just so damn irresponsible is what gets me.
posted by saulgoodman at 9:48 AM on June 28, 2014 [1 favorite]
@grouse, I don't know of any articles like this. And I don't think that this is what happened here, to the best of my knowledge.
I know (distantly, like friends of friends) a few folks who have done postdocs or gone on to work at Facebook's research side, and I've never heard any of them say (or anyone else gossiping about them saying) anything about the excitement of evading IRB by using social media data straight from Facebook.
Occasionally I'll hear people say something like... wow, social media and computational methods really give us new tools to answer old questions about collective action or X or Y. But this is almost always followed up by "but we need theoretical guidance!"
posted by k8t at 9:50 AM on June 28, 2014
Fits in well with the Mike Monteiro talk How Designers Destroyed the World.
Facebook didn't just happen, it was designed by people who see nothing wrong with monetizing personal relationships.
posted by fifteen schnitzengruben is my limit at 9:50 AM on June 28, 2014 [5 favorites]
The abstract of the article in k8t's links (the fourth one) reads:
"I argue that data from SNS(social network sites) are not per se public and research based on these data should not be exempt from the ethical standard that informed consent must be obtained from participants. Based on the concept of privacy in context (Nissenbaum, 2010), I further propose that this is because the norms of distribution and appropriateness are violated when researchers manipulate online contexts and collect data without consent."
posted by lazycomputerkids at 9:50 AM on June 28, 2014 [8 favorites]
"I argue that data from SNS(social network sites) are not per se public and research based on these data should not be exempt from the ethical standard that informed consent must be obtained from participants. Based on the concept of privacy in context (Nissenbaum, 2010), I further propose that this is because the norms of distribution and appropriateness are violated when researchers manipulate online contexts and collect data without consent."
posted by lazycomputerkids at 9:50 AM on June 28, 2014 [8 favorites]
Oh, FWIW, I did a postdoc position in the federal government that involved data collection without IRB approval (not big data, but surveys) and I myself was simultaneously excited (Cool! There's a revolution in Country X and I have the resources to pay for a survey there 2 days later and don't need to wait months for IRB in order to collect this!) and dismayed (But I will have an incredibly difficult time publishing anything from this because of the lack of IRB).
In my experience from that position, sometimes journals will overlook their IRB requirements under extreme circumstances (there was a revolution in country X and you collected data 2 days later and the results of your analysis are incredibly important for our collective understanding of blah blah blah's role in revolution? Okay). But the journal editors and I had to talk about it a lot, they knew that I was an ethical researcher that happened to be in a weird position where there was no IRB, and I wasn't engaging in any deception, manipulation, etc.
posted by k8t at 9:54 AM on June 28, 2014 [1 favorite]
Abstract of second IC study: "Social networking sites (SNSs) provide researchers with an unprecedented amount of user derived personal information. This wealth of information can be invaluable for research purposes. However, the privacy of the SNS user must be protected from both public and private researchers. New research capabilities raise new ethical concerns. We argue that past research regulation has largely been in reaction to questionable research practices, and therefore new innovations need to be regulated before SNS users’ privacy is irreparably compromised. It is the responsibility of the academic community to start this ethical discourse."
posted by k8t at 9:55 AM on June 28, 2014 [1 favorite]
Lyme Drop, they most certainly are. UCSF residents, attendings, and med students staff those hospitals and form the research staff as well.
posted by Existential Dread at 9:59 AM on June 28, 2014
But the journal editors and I had to talk about it a lot, they knew that I was an ethical researcher that happened to be in a weird position where there was no IRB, and I wasn't engaging in any deception, manipulation, etc.
You were analyzing data that had been collected, not manipulating it ahead of time. Do you not see the difference? These researchers did not do a semantic survey of how many people used "happy" after seeing a video of kittens in their feed. They did not do what you did.
posted by rtha at 10:02 AM on June 28, 2014 [3 favorites]
I am floored that this paper was published by the Proceedings of the National Academy of Sciences. This journal has a high impact factor, and frankly, anything that passes through its review process should be stellar. This paper troubles me on two grounds.
The first is ethical. Whatever Facebook says in its terms of service, the participants of this study absolutely did not give informed consent, the whole point of which is to give people clear awareness of what they can expect from the experiment for which they are voluntarily providing data. These protections are not set up so that researchers can "scrape by" by not tracking individual names (which most research doesn't do anyway). Informed consent matters.
The second is statistical. Yes, it's an experiment with tons of people. This is good because it means you're not likely to miss an effect that exists. But it can be misleading because a very high number of participants will render a tiny effect statistically significant. Cohen's d is a measure of effect size, or how much does this experimental variable (e.g., caffeine) really affect what we're measuring (e.g., alertness). A Cohen's d value of anything greater than d = 0.8 is a large effect, such that this manipulation drastically affected most people. d = 0.2 is a small effect, such that people experienced a small boost.
The effect sizes of the manipulations here are between d = 0.0001 and d = 0.02. You can see that it's a small effect in their Figure 1: we're talking about differences of less than 1% of the number of positive or negative words people generated. The statistical significance of p < .05 is driven by the massive number of participants rather than a large effect. The effect may be real, but it is so small that it's difficult to draw real conclusions. The authors do mention that it's a small effect, but I'd like to highlight that more strongly here.
posted by nicodine at 10:03 AM on June 28, 2014 [38 favorites]
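To make nicodine's point concrete, here is a rough Python sketch (not from the paper; the group sizes, means, and spread are invented purely to mirror the scale being discussed) showing how a difference of roughly d = 0.02 still clears p < .05 once each condition contains hundreds of thousands of people:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical numbers: ~300,000 people per condition and a shift of about
    # 0.02 standard deviations in the percentage of positive words used.
    n = 300_000
    control = rng.normal(loc=5.25, scale=7.0, size=n)  # % positive words, normal feed
    treated = rng.normal(loc=5.11, scale=7.0, size=n)  # % positive words, reduced-positivity feed

    # Cohen's d: difference in means divided by the pooled standard deviation.
    pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
    d = (control.mean() - treated.mean()) / pooled_sd

    t_stat, p_value = stats.ttest_ind(control, treated)
    print(f"Cohen's d ~ {d:.3f}, p = {p_value:.1e}")
    # With n this large, even a ~0.02 SD difference lands far below p = .05.

Shrink n to a few hundred per group and the same tiny difference stops being significant, which is the sense in which the headline p-value here says more about the sample size than about how strongly anyone's mood was actually moved.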
Welp, I'm avoiding 'positive' or 'negative' postings and staying strictly with 'vapid'.
In other words: stay the course
posted by mazola at 10:07 AM on June 28, 2014 [4 favorites]
I am floored that this paper was published by the Proceedings of the National Academy of Sciences.
This paper had a leg up because of the increasing buzzification of academic publication. In this case, the latest fashion is the unfortunately named Big Data. Theory? We don't need no stinking theory! We've got Big Data! And, maybe in this case, it was also "Ethics? We don't need no..."
posted by mondo dentro at 10:07 AM on June 28, 2014 [2 favorites]
Rtha, I was merely giving an example of when a journal might make an exception to their IRB policy. I explicitly said I did not have any manipulations. Also, showing a video of a cat and measuring happiness is an experiment and includes a manipulation. And I don't do research of that type at all. And what is a "semantic survey"? Please stop trying to school me on research methods.
posted by k8t at 10:12 AM on June 28, 2014
In Canada, they'd be in deep, deep shit right now. Consent is more than a click-through license. Doesn't the United States have ethics councils?
posted by Yowser at 10:12 AM on June 28, 2014
As a general rule of thumb, the answer to "Doesn't the United States have $THING?" is usually "no".
posted by Elementary Penguin at 10:14 AM on June 28, 2014 [5 favorites]
Unless $THING is something having to do with the military: then the answer is usually "We have ten times as many as our military forces actually asked for!"
posted by XMLicious at 10:16 AM on June 28, 2014 [4 favorites]
How do you all feel about the Pepsi Challenge? Is it okay that other businesses manipulate people in order to increase their profits?
posted by k8t at 10:18 AM on June 28, 2014
In Canada, they'd be in deep, deep shit right now.
Are you sure? This type of research is new, as are the accepted rules. That's sort of what the debate is about.
Doesn't the United States have ethics councils?
Yep. Institutional Review Boards.
posted by mondo dentro at 10:19 AM on June 28, 2014
Is it okay that other businesses manipulate people in order to increase their profits?
No. But we've accepted this constant manipulation, deception, and outright lying as "normal" in our wider society. We view it as necessary for "prosperity". Do you buy that, though? Our artists, for example, have frequently railed at the ethical bankruptcy of this state of affairs.
So far, we've managed to keep this sort of thing at least at bay in academia. I'd like to keep it that way. Corporate values are not academic values.
posted by mondo dentro at 10:25 AM on June 28, 2014 [4 favorites]
How do you all feel about the Pepsi Challenge? Is it okay that other businesses manipulate people in order to increase their profits?
Pepsi Challenge is an entirely different beast. For a better analogy:
Let's say you normally drink soda every day. You receive your soda through X Corp, who delivers a case every day. You know that X Corp occasionally tweaks its formula based on what it thinks you like, and past performance. It's a little annoying, but whatever; you're under the impression that it does it to (1) make the soda more enjoyable for you so you drink more of it, and (2) get you to buy more soda, both of which you're mostly OK with, because you like the soda.
Then you find out that for the past three months, X Corp has been adding slightly more of ingredient Y to your soda as part of a research experiment to see if it makes you sadder, without your knowledge. You had no idea that X Corp was trying to mess with your emotions.
I think most people would, understandably, be a little pissed off.
posted by damayanti at 10:25 AM on June 28, 2014 [22 favorites]
explicitly said I did not have any manipulations. Also, showing a video of a cat and measuring happiness is an experiment and includes a manipulation.
Yeah, I phrased that badly and failed to catch it on preview. I meant to phrase it so it was clear that data would be collected on what people said after seeing an unmanipulated feed.
But I'm glad to see that you do recognize the difference between what you did and what these researchers did, although it continues to trouble me that you seem entirely untroubled by what they did.
posted by rtha at 10:26 AM on June 28, 2014 [2 favorites]
For the record, I am also a social scientist, with colleagues who also work with social media data. General consensus in *my* news feed was "Cool study! Kind of questionable data collection, ethically".
Collecting data from people's news feeds without their consent? Sure! Publicly observable behavior and all that. Manipulating it without consent? Not so great.
posted by damayanti at 10:28 AM on June 28, 2014 [2 favorites]
Look, I am sitting in an apartment where there is a camera pointing at me right now, my Internet is traced, and I have an agent follow me and my kid everywhere but I do it in the name of research. Maybe this is a bit of a humble brag-sounding reply, but perhaps I am a bit less surprised at the lengths people will go to for their academic work.
posted by k8t at 10:30 AM on June 28, 2014 [1 favorite]
How do you all feel about the Pepsi Challenge? Is it okay that other businesses manipulate people in order to increase their profits?
(By virally getting people to look up product-related things they'd otherwise have been unaware of, for example?)
No, it's not ethical that businesses manipulate people to increase their profits. It's especially unethical to manipulate people who have explicitly not consented to being manipulated, and of course businesses avoid such problems by not informing people they're doing so or blurring the lines by declaring that something procedural like clicking through a contract gets them a blank check.
posted by XMLicious at 10:30 AM on June 28, 2014
Well, I guess now we know one of the new features of Facebook Premium™.
Later that day I got an email from Facebook saying "We noticed you logged in and didn't do anything!" followed by some cool suggestions.
Facebook watches everything you do, and it's programmed to be very needy. I find it hilarious that it makes ham-handed efforts to lure me back if I don't use it for a few days: "you missed N status updates on your feed!", "do you know this person that you don't know?", and the always great "you better log on now, X posted a picture!"
posted by cosmic.osmo at 10:31 AM on June 28, 2014 [1 favorite]
Apparently, the fact that AVClub broke this is having the effect of blunting the impact of the news, as a lot of people are forming the mistaken impression it's an Onion satire. My wife has said several of her FB friends responded by commenting on how it's just another Onion parody story. Brilliant as performance art, horrible as reality.
posted by saulgoodman at 10:31 AM on June 28, 2014 [4 favorites]
Mod note: If you're personally connected to the subject of a post, it's a good idea to speak your piece and then move on, not set yourself up as the defender of their honor. Thanks.
posted by restless_nomad (staff) at 10:31 AM on June 28, 2014 [3 favorites]
Is it okay that other businesses manipulate people in order to increase their profits?
What's the nature (scope/degree/duration) of the manipulation? That's why informed consent is an imperative.
posted by lazycomputerkids at 10:32 AM on June 28, 2014 [1 favorite]
... perhaps I am a bit less surprised at the lengths people will go to for their academic work.
Well, I know Godwin bait when I see it.
posted by mondo dentro at 10:33 AM on June 28, 2014 [9 favorites]
I'm actually afraid to favorite that.
posted by George_Spiggott at 10:36 AM on June 28, 2014 [1 favorite]
New Internet mantra: "Be kind, for everyone you meet is fighting a hard battle against whatever blind study their Facebook feed is currently in"
posted by mathowie at 10:36 AM on June 28, 2014 [42 favorites]
Would this qualify as the tort of intentional infliction of emotional distress?
posted by humanfont at 10:54 AM on June 28, 2014 [5 favorites]
Look, I am sitting in an apartment where there is a camera pointing at me right now, my Internet is traced, and I have an agent follow me and my kid everywhere but I do it in the name of research. Maybe this is a bit of a humble brag-sounding reply, but perhaps I am a bit less surprised at the lengths people will go to for their academic work.
You consented to that! You had information about what you were getting into and you got to say "okay, sign me up for that gig!"
I didn't get to do that for this piece of research. This is not in the least comparable.
posted by rtha at 10:54 AM on June 28, 2014 [17 favorites]
Are you saying that Jamie and Jeff didn't consent to doing this study? Again, you're not understanding me. I said... People go to great lengths for their research projects (I am a researcher, not the subject). So when you (collective you) wonder what these people were thinking with this, I'm saying... People go to great lengths for their research that some people may view as irrational or illogical.
posted by k8t at 11:00 AM on June 28, 2014
framing this as Facebook being evil is really a slap in the face
No framing for the evil is needed beyond:
"They trust me — dumb fucks,"
posted by rough ashlar at 11:06 AM on June 28, 2014 [9 favorites]
Would this qualify as the tort of intentional infliction of emotional distress?
Oh, my. That's a very good point.
posted by cgs06 at 11:15 AM on June 28, 2014 [6 favorites]
It'd be amazing to see this done in a few years with augmented reality glasses. One person walks down a street in an unfamiliar town, and the heads-up display indicates the location of "Dumpster", "Crack Den", "Payday Loans". Another more favored individual wearing the same glasses and walking down the exact same street sees "Florist", "Cafe", "Top-Rated Preschool"...
posted by George_Spiggott at 11:19 AM on June 28, 2014 [2 favorites]
I have been increasingly frustrated with FB's manipulation of my main page anyway by obscure algorithms, but also, my post about this issue dropped off my home page immediately. It's still on my personal page, where I am having a discussion with another FB friend about it, but it's not anywhere on the home page. Older posts from other people are still there. Of course, I don't know what my other FB people see on their home pages, so how can I judge? I keep wanting it to be like Metafilter where it's strictly time-based. Old things drop, new things are at the top. Instead things you like appear and vanish or keep reappearing long after you've stopped caring.
It's probably just time to drop Facebook. I really like the idea of it, have enjoyed the communication I have through it, but the execution is making it less and less workable and more and more creepy.
posted by emjaybee at 11:28 AM on June 28, 2014 [2 favorites]
So when you (collective you) wonder what these people were thinking with this, I'm saying... People go to great lengths for their research that some people may view as irrational or illogical.
This odd reframing might suit your friends (or, if you'd rather, the "respected academic researchers" that you travel with), but it doesn't describe what anyone is actually saying.
People aren't saying, wow, those "respected academic researchers" work way too hard, isn't it irrational that people would go to such lengths for their research?
People are saying, wow, these turkeys have been acting unethically. It's not irrational or illogical, it is wrong.
posted by grouse at 11:34 AM on June 28, 2014 [15 favorites]
Unlike 1.4 billion other people on the planet, I am not a user of facebook precisely for these reasons (I chose not to be a data source for FB). Like others have said above, I am astonished that agreeing to FB terms when signing up can be construed as consent to participate in an experiment. The editor who handled the manuscript (Susan Fiske at PNAS) is a highly respected social psychologist and I am astonished that this was acceptable as human subject consent.
Then again, the A/B method manipulates a variable and collects data on behavior by people who saw A vs people who saw B. Maybe the web is just one big social science experiment and connecting to your ISP counts as consent.
posted by bluesky43 at 11:41 AM on June 28, 2014 [2 favorites]
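For anyone unfamiliar with how the A/B pattern mentioned above is usually wired up, here is a generic sketch; it is not Facebook's code, and every name in it is made up. The common trick is to hash a user ID so each person lands in the same bucket on every visit, then compare logged behavior across buckets:

    import hashlib

    def assign_bucket(user_id: str, experiment: str, treated_fraction: float = 0.5) -> str:
        """Deterministically place a user in 'A' (control) or 'B' (treatment)."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode("utf-8")).hexdigest()
        # Map the first 8 hex digits onto [0, 1); the same user always gets the same score.
        score = int(digest[:8], 16) / 0x100000000
        return "B" if score < treated_fraction else "A"

    # Hypothetical usage: route 10% of users into a feed variant.
    for uid in ("user-1001", "user-1002", "user-1003"):
        print(uid, assign_bucket(uid, "feed_variant_test", treated_fraction=0.10))

Nothing in that mechanism asks the user anything; whether the agreement gathered at signup covers being sorted into the B bucket is exactly what this thread is arguing about.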
I just figured out, I think that k8t is saying that some dangerous aspect of her study of technology use in authoritarian states results in a personal threat to her and her child. The camera, monitored internet, and agents to follow herself and k₈t are there for protection from whatever the threat caused by pursuing her research is, not components of performing research with k8t and k₈t as the study's subjects.
posted by XMLicious at 11:58 AM on June 28, 2014
I guess I don't understand the outrage here. If you go into a Starbucks, you are walking into a designed environment that has been engineered to increase a corporation's profits. If Starbucks decides for whatever reason to deliberately offer different beverages, different music, different prices, and a different layout in different locations, do they really need to go to an IRB to get permission? If the researchers analyzing the resulting data then publish their results and make them publicly available to everyone, is that more evil than just keeping it in-house?
posted by leopard at 12:02 PM on June 28, 2014
Perhaps you missed all the comments above about the difference between corporations in pursuit of profits vs. researchers acting in pursuit of knowledge. Academia holds itself to a higher standard, out of professional pride, awareness of a long history of ethics mistakes, and self-preservation.
posted by Nelson at 12:07 PM on June 28, 2014 [5 favorites]
leopard: To some degree, I think you're right; there's a similar comparison to be made with supermarkets maximising profits by blowing smells around and placing products in your line of sight. I suspect there are a few reasons why people are more upset with Facebook's study, including:
1) The extremely bald, direct framing of the study: "can we manipulate your emotions?" That outcome may also be a consequence of what Starbucks et al do, but I almost feel like I'd be happier (hah) if the paper was about making people click on ads more.
2) The manipulation of the news feed. Yes, the news feed is manipulated in lots of ways - to reduce clutter and spam, to maximise user engagement, etc - and you could argue those goals are shared by users. But manipulating it in order to provoke sadness or happiness feels even more at odds with the reasons why we're on Facebook; to find out about our friends and family, to distract ourselves, to show off, etc.
3) The novelty of the environment. When we're in Starbucks or Walmart, we're there to buy stuff and we know that they want to sell us stuff. It's more obvious that there is both co-operation and competition going on there. But the fact that Facebook is 'free' can confuse the issue; aren't they supposed to be on our side? I didn't expect them to try and make me happier or sadder! I suppose you could say that people shouldn't be so naive, but I'd rather not live in a world where we're continually suspicious of each other.
posted by adrianhon at 12:14 PM on June 28, 2014 [4 favorites]
Appalling but not surprising on FB's part. But I think the National Academy of Sciences is getting a free pass. In my opinion, they are even more contemptible for publishing it. More contemptible, because they ought to be taking a close look at ethical breaches in any submission. This is not a close call. Transparent consent of its subjects is a fundamental obligation of any researcher working with humans, and assuring this is equally fundamental for editors presiding over what gets published.
posted by Jeff Dewey at 12:15 PM on June 28, 2014 [3 favorites]
Also: the scale of the study, and its pinpoint nature (tweaking everyone's feeds in a different way) is very different from Starbucks moving the seats around in a few dozen cafes and seeing how people react.
posted by adrianhon at 12:17 PM on June 28, 2014 [1 favorite]
If I had tried this in graduate school, the institutional review board would have crapped bricks.
But they took a page out of Stanley Milgram!
I sort of wondered why the feed on the upper right hand side didn't jive with the main one...duh...
posted by Alexandra Kitty at 12:17 PM on June 28, 2014
Nelson: So if Starbucks runs the experiments on their own, are people with academic affiliations ethically obligated to refuse to look at their data and are research journals ethically obligated to not publish the resulting analysis? I'm not trying to be snarky.
Facebook is obviously capable of paying academic researchers to work for them full-time, presumably this corporation/university collaboration largely serves as a mutual back-scratching exercise that benefits all the parties involved (academics get to work with unique data sets and publish a high-profile paper, Facebook gets to advertise opportunities to academics, etc). But the ethical issue almost seems to be more between Facebook and its users -- people think they're getting a "natural" user experience while Facebook is of course very interested in engineering it to their benefit (which is not necessarily to the detriment of the users -- Facebook is very interested in having the users love the site).
posted by leopard at 12:19 PM on June 28, 2014
Some interesting comments on the Verge article, suggesting that "Facebook probably violated CFR 50.25 regarding informed consent of test subjects, particularly as the test itself intentionally had a detrimental side effect as a desired outcome," and also that:
it violates Title 45 CFR Part 46 on the protection of human subjects
a) informed consent (aka, “the TOS argument”) might have been lapsed if beyond a year of consent (§46.103.b.4.ii)
b) that participants are debriefed after participation (§46.116.d.4)
c) minors may have been included without parental consent (§46.408)
posted by adrianhon at 12:25 PM on June 28, 2014 [13 favorites]
Jesus christ people! Forget the theoretical implications! The study's own claim is it worked! FB made some real people sad deliberately without their consent. That's social violence, whether it might be possible to learn from it, too, or not. Sociopaths are always conducting little social experiments to learn how to better manipulate their victims. We might learn from their experiments, too, but that doesn't justify their actions or make them not socially violent behaviors or make them legitimate research methods!
posted by saulgoodman at 12:29 PM on June 28, 2014 [40 favorites]
k8t: Facebook can do whatever the hell it wants with its property. I don't like that, but it is how it is.
Unfortunately Facebook views the emotional state and individual personalities of the user base as "property".
posted by Ray Walston, Luck Dragon at 12:32 PM on June 28, 2014 [5 favorites]
Facebook probably violated CFR 50.25 regarding informed consent of test subjects, particularly as the test itself intentionally had a detrimental side effect as a desired outcome.
Unfortunately, I don't think they violated 21 CFR 50 because (IIRC) the scope of the regulation covers FDA-regulated research as well as government-sponsored research. If the two academics took government money to do this research, then I think there would be a violation; otherwise, I don't think it would be the case.
posted by cgs06 at 12:36 PM on June 28, 2014
This is like discovering that Starbucks was pissing in your latte in the name of science. Then publishing a paper with their findings correlating urine-tainted coffee with Starbucks wifi use. Then being shocked that customers were bothered about it, falling back on the terms of sale printed on the receipt as an excuse, and insisting furthermore that we ought not worry because the urine was sterile. Don't get me wrong, Starbucks has done this test, but they aren't publishing the results. Don't want McDonald's to start using what we call the mermaid's secret sauce. Also PR, Marketing and Legal are run by component professionals.
posted by humanfont at 12:39 PM on June 28, 2014 [10 favorites]
Stupid tablet spell check.
posted by saulgoodman at 12:40 PM on June 28, 2014
Also PR, Marketing and Legal are run by component professionals.
Little Lego lawyers and marketing and PR execs! How cute!
posted by XMLicious at 12:42 PM on June 28, 2014 [1 favorite]
No, not exactly like that. For one thing, the urine wasn't sterile. They actively harmed people.
For another, they weren't testing whether urine in your coffee correlates to wifi use. They were testing whether unsterile urine in your coffee harms you. Turns out it does.
posted by Flunkie at 12:49 PM on June 28, 2014 [1 favorite]
Are you saying that Jamie and Jeff didn't consent to doing this study?
They consented but didn't seem to care about giving their subjects the same courtesy. The more often you make it clear that you either don't regard this as important or you don't care about it, the more disturbing I find it.
posted by rtha at 12:57 PM on June 28, 2014 [25 favorites]
I said some stuff about this in the "ironic racism" thread.
posted by Small Dollar at 1:13 PM on June 28, 2014
No, not exactly like that. For one thing, the urine wasn't sterile. They actively harmed people.
It's more like they put less sugar in your coffee, and then found that you liked it less.
In any circumstance, Facebook doesn't show you every post that could possibly be in your feed. In the "pissing in your coffee" scenario, they tinkered with their display to make it less likely that a positively-worded post would be shown to you. I guess this is similar to the Tuskegee experiment. (It's not like anyone has ever complained that Facebook encourages people to put forward their unrealistic "best selves" and that this creates unrealistic social expectations that cause more depression or whatever.)
I haven't even gotten to the crappy social science part of this -- virtually every social science study that I've seen linked on Metafilter has been overhyped. It looks to me that people typically use about 5.25% positive words in their posts, but when Nazis shit in their coffee and cut them open without permission, this drops to about 5.1%.
posted by leopard at 1:16 PM on June 28, 2014
You're the one comparing it to the Tuskegee experiment, not anyone else.
The analogies to coffee and urine and sugar, while amusing, are not super enlightening. There are lots of different elements of this study that people have objected to, but perhaps the most important one is the lack of informed consent. I don't think people would be quite as up in arms if Facebook had asked users to opt-in to a psychological study and then told them the details afterwards. Probably wouldn't have hurt if they gave participants some virtual coins or whatever.
As for the crappiness of this study, that has also been pointed out in the comments here, as is the case in other posts.
posted by adrianhon at 1:23 PM on June 28, 2014
leopard, come on. Are you really saying you don't see any difference between corporate manipulation via advertising and academic research? So you don't mind it in this case. OK, I'm more or less with you. It's pretty weak tea. But what if the manipulations were slightly harsher--say in a study designed to study online bullying--and a dozen teenagers offed themselves afterwards?
You've got to remember, the criticism is not just about this one particular case. It's about a body of protocols and procedures to be carried out across the board for all studies involving human subjects. Every university has something like an "Office of Research Protections" to make sure exactly these protocols are followed.
posted by mondo dentro at 1:25 PM on June 28, 2014 [3 favorites]
It's more like they put less sugar in your coffee, and then found that you liked it less.
So what you're saying is you'll object to nonconsensual mass psychological experiments once Facebook gets to the point of collaborating with researchers effective enough to manipulate 0.7 million people into some statistically significant increases in clinical depression or perhaps violence?
posted by crayz at 1:35 PM on June 28, 2014 [2 favorites]
So if Starbucks runs the experiments on their own, are people with academic affiliations ethically obligated to refuse to look at their data and are research journals ethically obligated to not publish the resulting analysis?
I don't know, but it's at least a question I hope a researcher would ask themselves. As I've said several times in this thread I'm glad Facebook did this study, I just wish they had a little more respect for their lab rat population. I don't think the harm they did in their experiment was that severe, and it was probably well within the boundaries of all the other choices that go into the feed selection algorithm. But given Facebook's generally sleazy ethics surrounding user data, having their fancy scientists glibly publish a study like this without even a nod to ethical concerns is irritating. (And guess what, this blew up on Saturday morning, so we won't hear anything soon.)
I'm still processing nicodine's comment that the actual effect they demonstrated is tiny. The paper says as much. I can't decide how reliable their measurement of the effect size is, the experiment design introduces so many extra factors. My memory from Riot's League of Legends work is they claim the contagion effect is quite strong. Unfortunately they've only given talks, not published peer-reviewed papers, so the research isn't as solid.
posted by Nelson at 1:40 PM on June 28, 2014 [2 favorites]
mondo dentro: I understand that there is a well-developed social norm in academia that consent must be provided by human subjects, and that there is a good reason for this norm. But from an overall moralistic perspective, I would expect experiments run by businesses to have greater social costs and lesser social benefits (think of data collection by casinos), and yet there is no social norm in business that consent must be provided by human subjects, which is why I brought up Starbucks. This lack of consistency seems to create an "arbitrage opportunity" for researchers.
There just seem to be a whole set of inconsistencies here. Why is *not* publishing more ethical than publishing? Why would Facebook doing this entirely in-house be more ethical than them collaborating with people in academia? If Starbucks does not need consent to vary the amount of sugar in your coffee, why does Facebook need consent to vary the algorithm that dictates the appearance of your News Feed? Why is Facebook manipulating you to try to advance a research agenda worse than Facebook manipulating you so that shareholders will enjoy better returns next quarter?
I guess I understand the answer to these questions -- all these actions threaten the existing academic social norm. But from a utilitarian perspective, this discussion doesn't make any sense. See, I just don't get this:
So what you're saying is you'll object to nonconsensual mass psychological experiments once Facebook gets to the point of collaborating with researchers effective enough to manipulate 0.7 million people into some statistically significant increases in clinical depression or perhaps violence?
So what you're saying is that it's perfectly OK for Facebook to run mass psychological experiments to get people to click on ads or play addictive video games or to behave in some way that will enhance some middle manager's bonus, but if they attempt to address a major open issue in social science they are crossing some critical ethical divide that must not be breached? It's OK for a casino to run experiments to figure out how to make sure people at risk for gambling addiction are fully drained of their money -- because hey, that's business! -- but if some analyst at a casino wanted to publish a paper that might have a major impact on social science research, that's just totally wrong?
posted by leopard at 1:43 PM on June 28, 2014
This lack of consistency seems to create an "arbitrage opportunity" for researchers.
Exactly. That's the problem. I actually don't think the problem is with FB. I don't expect them to have high ethical standards. After all, their entire business model is based on advertising which is an ethical abomination, particularly in its Big Data, algorithmically enhanced form (cue the obligatory Bill Hicks bit). The problem is with the academic side, including the researchers, their institutions, and the journal. And when I say there's a problem, I'm not saying I know this was unethical or that wrongdoing was definitely done or that these researchers should have their careers ruined. Far from it. It's one study using a new type of approach. There are gray areas here. Mistakes happen. But it needs to be reviewed. I don't see any evidence that the authors even considered standard human subject ethical issues, and that's something that really, really needs to be looked at carefully and with transparency by the academic community. We're witnessing a classic slippery slope scenario.
posted by mondo dentro at 1:58 PM on June 28, 2014 [3 favorites]
I am floored that this paper was published by the Proceedings of the National Academy of Sciences. This journal has a high impact factor, and frankly, anything that passes through its review process should be stellar. This paper troubles me on two grounds.
Previously on Metafilter
Same editor as this paper BTW.
(the links early on do not do a good job exposing the problems, but see this takedown)
posted by leopard at 1:58 PM on June 28, 2014
leopard, correct me if I'm wrong in my understanding here, but I think you're saying that:
a) society believes that academia should be bound by codes of conduct for human and behavioral research (e.g. consent), and that belief is codified in various laws and requirements.
b) society doesn't believe that businesses should be bound in the same way, and we know that's the case because there aren't similar laws limiting what businesses can do with regard to behavioral research.
Given those assumptions, yes, we should be totally fine about whatever Facebook is doing, because if we weren't then we should also be upset about every other instance of behavioral research/manipulation by other businesses, like restaurants and casinos.
The thing is, I think the reason why there aren't laws limiting behavioral research by businesses isn't because society doesn't care - I think it's because there would be a very organised and well-funded lobby opposing such laws. Clearly there are shades of grey here; I personally have less problem with Starbucks figuring out how to sell coffee than I do with casinos trying to wring every last dollar out of punters, because the potential harm to individuals and society is much much lower in the former case than in the latter.
Whether the research is published (and whether the fact of that publishing reflects poorly on the journal) is not the core issue here. If they're going to do the study, fine, it's better that it's published. It would be even better if consent were obtained, and I suspect that if you asked the general public, they would agree - but that's speculation on my part.
posted by adrianhon at 2:01 PM on June 28, 2014 [2 favorites]
Analysis by James Grimmelmann (via Mefi's own jessamyn):
This is bad, even for Facebook: Of course, it’s well known that Facebook, like other services, extensively manipulates what it shows users. (For recent discussions, see Zeynep Tufekci, Jonathan Zittrain, and Christian Sandvig). Advertisers and politicians have been in the emotional manipulation game for a long time. Why, then, should this study—carried out for nobler, scientific purposes—trigger a harsher response?
One reason is simply that some walks of life are regulated, and Facebook shouldn’t receive a free pass when it trespasses into them simply because it does the same things elsewhere. (...)
A stronger reason is that even when Facebook manipulates our News Feeds to sell us things, it is supposed—legally and ethically—to meet certain minimal standards. Anything on Facebook that is actually an ad is labelled as such (even if not always clearly.) This study failed even that test, and for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded. (...)
The real scandal, then, is what’s considered “ethical.” The argument that Facebook already advertises, personalizes, and manipulates is at heart a claim that our moral expectations for Facebook are already so debased that they can sink no lower. I beg to differ. This study is a scandal because it brought Facebook’s troubling practices into a realm—academia—where we still have standards of treating people with dignity and serving the common good.
posted by adrianhon at 2:07 PM on June 28, 2014 [13 favorites]
Some persons are willing to make that trade.
Libertarian contract law VS
it violates Title 45 CFR Part 46 on the protection of human subjects
US Statutes. So many in fact the last 3 attempts to count them failed.
Good thing the US of A is a nation under the Rule Of Law eh?
(and thank you BTW for the link about the statute issue.)
posted by rough ashlar at 2:10 PM on June 28, 2014
So if I read it right, the product of this research will be that Facebook can, for example, make a targeted set of users feel unloved, and then sell ad placement for ice cream on their pages?
That's brilliant!
posted by George_Spiggott at 2:12 PM on June 28, 2014 [2 favorites]
Grimmelmann adds:
The study was presented to an IRB, which approved it “on the grounds that Facebook filters user news feeds all the time, per the agreement.”
See the original email sent to the PNAS editor enquiring about the IRB, and the editor's full response.
posted by adrianhon at 2:12 PM on June 28, 2014 [18 favorites]
I was just about to post that, adrianhon. The plot thickens. Thanks.
posted by mondo dentro at 2:14 PM on June 28, 2014
adrianhon -- I think you have my position down pretty accurately. I think the other point I would make is that this is a conflict between utilitarian and deontological ethics. Is morality a matter of minimizing harm or following rules? These are two different things, and I think a lot of the above discussion conflates the two or assumes that the two will necessarily converge to the same results.
I think the direct harm here amounts to a reduction of a few thousand positive words in Facebook posts across several hundred thousand users during a week, by the way, plus whatever mental anguish accompanied such a dramatic shift.
posted by leopard at 2:16 PM on June 28, 2014
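As a back-of-the-envelope check on that "few thousand positive words" figure, the arithmetic can be run directly from numbers mentioned earlier in the thread; the per-user posting volume below is a pure guess, and the result scales linearly with it:

    # Inputs: the ~5.25% vs ~5.1% positive-word rates cited above, a user count on
    # the order of "several hundred thousand", and an assumed weekly posting volume.
    users = 300_000
    words_per_user_week = 20          # assumption; heavier posters push this up
    baseline_rate = 0.0525            # share of words that are positive, control
    treated_rate = 0.0510             # share of words that are positive, treated

    missing_positive = users * words_per_user_week * (baseline_rate - treated_rate)
    print(f"~{missing_positive:,.0f} fewer positive words over the week")
    # ~9,000 with these inputs; a tenfold heavier posting volume would make it ~90,000.

Either way the per-person change is tiny, which is the effect-size point made upthread.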
So if I read it right, the product of this research will be that Facebook can, for example, make a targeted set of users feel unloved, and then sell ad placement for ice cream on their pages?
That's pretty much how the small arms industry works in conjunction with right wing media outlets, with "feeling endangered" in the place of "feeling unloved".
posted by mondo dentro at 2:17 PM on June 28, 2014 [2 favorites]
Advertisers and politicians have been in the emotional manipulation game for a long time.
It used to be called propaganda but the next edition of the book with the title of Propaganda was re-titled Public Relations.
State of Mind is the title of a 2013 film touching on the topic if one wants to take a POV that such is a bad thing when the State does it.
posted by rough ashlar at 2:17 PM on June 28, 2014
Even the editor of Facebook's mood study thought it was creepy.
posted by waterlily at 2:19 PM on June 28, 2014 [1 favorite]
Goddamn I wish I was a civil attorney! I'd be sending out class action notices to parties potentially impacted already. If even one of the roughly 600,000 unwitting participants in this experiment was bipolar and went into a depressive episode around the time FB was conducting this research, I would make them as rich as Zuckerberg. You don't get to fuck people's real lives and families over and then intellectualize the harm away as "interesting" enough to justify.
posted by saulgoodman at 2:28 PM on June 28, 2014 [11 favorites]
I can't wait until that journal finally accepts my paper on the past year of Sad MetaFilter. You guys have no idea how much more fun we're having on the mod-only Happy MetaFilter. So many unicorn double-posts there!
Anyway, I've said too much...
posted by mathowie at 2:28 PM on June 28, 2014 [37 favorites]
Well this new information about PNAS considering the ethics of the study is very useful. Just for completeness, I typed in the text of the reply letter.
Thank you for your inquiry. I was concerned about this ethical issue as well, but the authors indicated their university IRB had approved the study, on the grounds that Facebook filters user news feeds all the time, per the agreement. Thus, it fits everyday experiences for users, even if they do not often consider the nature of Facebook's systematic interventions.
Having chaired an IRB for a decade and having written on human subjects research ethics, I judged that PNAS should not second-guess the relevant IRB.
Susan T. Fiske
Psychology & Public Affairs
Princeton University
Both the inquiry and response are annoyingly anonymous imgur images, but I'm about 90% confident they come from @ZLeeily. She wrote the inquiry yesterday, posted the reply this morning.
Now we can debate whether an IRB should have approved the study or PNAS should have accepted the IRB's conclusion, but at least the question was apparently considered.
posted by Nelson at 2:31 PM on June 28, 2014 [3 favorites]
AHA! So now we know the truth about why MeFi is blue.
posted by Westringia F. at 2:31 PM on June 28, 2014
... think the other point I would make is that this is a conflict between utilitarian and deontological ethics. Is morality a matter of minimizing harm or following rules?
I think this tidy distinction you're using misses the utilitarian function of ethical rules.
When I was a kid, professionals (doctors, dentists, lawyers) didn't advertise or engage in additional commercial practices outside of charging for their professional services. It was considered unethical. Likewise, drug companies couldn't advertise. Pharmacies that just sold medicines were called "ethical pharmacies".
Since Reagan, this has all changed. Now professionals advertise and many try to sell you stuff when you walk into their office. What was lost? From a consumer perspective, what was lost was the ability to trust one's professional consultant. Knowing that a drug company just wants to sell ADHD medication as much as they possibly can, that they advertise on TV, and that they send attractive salespeople to doctors' offices to push the product means that prescriptions can never be wholly trusted. And so on, from dentists selling tooth whiteners (that never used to happen) to climate scientists funded by the Koch brothers.
The erosion of professional detachment from financial concerns (since "everything is like a business"), and the view that it is somehow "utilitarian" to minimize the distance between professional and corporate ethical comportment has had, and will have, the practical effect of greatly devaluing the quality of the information we get from professional sources. That is a huge societal loss from a utilitarian perspective.
posted by mondo dentro at 2:33 PM on June 28, 2014 [19 favorites]
Goddamn I wish I was a civil attorney!
Depending on your State you may not have to be a lawyer to file a Qui Tam action. Some States, like Texas, state that the Grand Jury can investigate whatever matters come to their attention by whatever means. Others, like Wisconsin 968.01, let anyone go to a Judge and have 'em sign a criminal complaint.
You don't HAVE to be "learned counsel" to take some form of action.
There is always sending a notice to the Hague for a violation of the Universal Deceleration of Human Rights.
posted by rough ashlar at 2:35 PM on June 28, 2014 [1 favorite]
leopard, I agree that this comes down to a question of what kind of ethics we subscribe to. I am not an expert on ethics but I favour a virtue-based system rather than utilitarian or deontological system, so I would rather not have Facebook trying to make me happier without my knowledge or consent - I'd prefer to do the hard work of that myself.
As for the direct harm, my issue is with the scale, but I'd be interested to see before-and-after versions of people's News Feeds to understand that properly. I'm also concerned about the future implications of the research.
On another note, as noted by waterlily above, The Atlantic got in touch with the PNAS editor, Susan T. Fiske, who said (in various excerpts):
I was concerned until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people's News Feeds all the time... I understand why people have concerns. I think their beef is with Facebook, really, not the research.
posted by adrianhon at 2:40 PM on June 28, 2014 [5 favorites]
People are supposed to be, under most circumstances, told that they're going to be participants in research and then agree to it and have the option not to agree to it without penalty.
A lot of the regulation of research ethics hinges on government supported research, and of course Facebook's research is not government supported, so they're not obligated by any laws or regulations to abide by the standards. But I have to say that many universities and research institutions and even for-profit companies use the Common Rule as a guideline anyway. It's voluntary. You could imagine if you were a drug company, you'd want to be able to say you'd done the research ethically because the backlash would be just huge otherwise.
But if you find money on the street and it makes you feel cheerful, the idea that someone placed it there, it's not as personal. I think part of what's disturbing for some people about this particular research is you think of your News Feed as something personal. I had not seen before, personally, something in which the researchers had the cooperation of Facebook to manipulate people... Who knows what other research they're doing.
I don't think the originality of the research should be lost. So, I think it's an open ethical question. It's ethically okay from the regulations perspective, but ethics are kind of social decisions. There's not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn't have been done... I'm still thinking about it and I'm a little creeped out, too.
(I'm appreciating the quick updates and links to other commentary here! Worth noting that quite a few people are watching this thread, e.g. MeFi's Own Grimmelmann and Zittrain)
posted by adrianhon at 2:44 PM on June 28, 2014
Since we're comparing the reactions in our feeds: I study complex networks. In my case, the application area is biological, but I intersect with other network scientists who do social networks research using social media data (people like David Lazer, Sinan Aral, &c). The social networks folks in my [twitter] feed have been almost completely silent about this so far.
However, it'll be interesting to hear what that community has to say about it, and more generally about ensuring ethical best-practices in social media studies, at the upcoming Conference on Digital Experimentation (CODE) at MIT this October. From the "About" section:
The ability to rapidly deploy micro-level randomized experiments at population scale is, in our view, one of the most significant innovations in modern social science. As more and more social interactions, behaviors, decisions, opinions and transactions are digitized and mediated by online platforms, we can quickly answer nuanced causal questions about the role of social behavior in population-level outcomes such as health, voting, political mobilization, consumer demand, information sharing, product rating and opinion aggregation. When appropriately theorized and rigorously applied, randomized experiments are the gold standard of causal inference and a cornerstone of effective policy. But the scale and complexity of these experiments also create scientific and statistical challenges for design and inference. The purpose of the Conference on Digital Experimentation at MIT (CODE) is to bring together leading researchers conducting and analyzing large scale randomized experiments in digitally mediated social and economic environments, in various scientific disciplines including economics, computer science and sociology, in order to lay the foundation for ongoing relationships and to build a lasting multidisciplinary research community.
posted by Westringia F. at 2:56 PM on June 28, 2014 [5 favorites]
From time to time I voluntarily choose to participate in neuropsychological research that touches on mood and affect. I do this because I live with a chronic health condition that is associated with attention difficulties, mood disturbances and mood disorders (epilepsy). I do this to help make life better for other, future people with epilepsy and/or mood disturbances or ADHD. The pools of eligible participants for studies on these topics are significantly smaller than the number of people who use Facebook.
I haven't voluntarily participated in research recently, for a few reasons, so this is theoretical, but it strikes me that one isn't always allowed to participate in multiple studies at once. I am sure there are people who don't abide by that, but when I've been in studies, I tried not to fuck shit up for researchers if I could help it, because I know that fucking with the data from one participant in these studies is much more likely to fuck with the entire experiment, since there aren't 600,000+ participants involved. So, had I known that Facebook might be actively altering my newsfeed to shape my mood, or been given some form of actually informed consent about participating in mood-related research that would involve direct manipulation of my social media experience, I would have opted out of allowing research, or deactivated my account...whatever. Because I'd rather save my eligibility for research that needs people with traits less common than "has a Facebook account."
Basically, I think these guys are short-sighted assholes, and not just to the involuntary participants. Their study may have fucked with other researchers' work, work that may not have accidentally-on-purpose-oopsie-my-gosh! skipped the whole informed consent part, and that also sucks.
posted by Uniformitarianism Now! at 3:30 PM on June 28, 2014 [22 favorites]
I feel like there should be a standard that any research conducted on people should be publicly accessible (if it can't be banned altogether) so that people know what advertisers and businesses are using against them. I have a hard time understanding why it's legal to conduct marketing research in secret to manipulate people without their knowledge.
posted by xarnop at 3:31 PM on June 28, 2014 [1 favorite]
Wow, you spend 5 hours on a plane and Facebook blows up the internet. Again.
Speaking from my own perspective as another social scientist who also studies social media (though not Facebook), and who also travels in similar academic circles as some of the study's authors, I think this study is very ethically problematic. I am dismayed that the academics showed such disregard for their subjects, dismayed that an IRB apparently approved this, and dismayed by the PNAS editor's subsequent statement. The ethical lapses are magnified by the study's dubious value when contrasted with the possible harm of manipulating people's emotions.
posted by DiscourseMarker at 3:39 PM on June 28, 2014 [17 favorites]
Someone must have flipped a lever at Facebook, because now there are 35k Facebook shares of the AV Club article.
posted by Elementary Penguin at 3:41 PM on June 28, 2014
I've kept my Facebook account open so I could read updates from the friends and family members who insist on using Facebook. (I stopped posting any updates of my own some time ago.)
It was frustrating to use, because I wanted to see ALL updates from the people I was Facebook friends with, and Facebook insisted on filtering my feed. But I persevered.
Today I permanently deleted my account. I'll call my friends and family members, or send them email, or make a point of getting together in person for those who live nearby. I'll probably lose touch with some people I'm less close to. But I just can't stay on Facebook any longer.
posted by jeri at 3:43 PM on June 28, 2014 [11 favorites]
Christian Sandvig made an interesting blog post a couple of days ago (i.e. before this study was published) related to this subject, about Corrupt Personalization:
Common-sense reasoning about algorithms and culture tells us that the purveyors of personalized content have the same interests we do. That is, if Netflix started recommending only movies we hate or Google started returning only useless search results we would stop using them. However: Common sense is wrong in this case. Our interests are often not the same as the providers of these selection algorithms.
...Facebook’s business model is to produce attention for advertisers, not to help you — silly rabbit. So they must have felt that using your reputation to produce more ad traffic from your friends was worth the risk of irritating you. Or perhaps they thought that the practice could be successfully hidden from users — that strategy has mostly worked! In sum this is a personalization scheme that does not serve your goals, it serves Facebook’s goals at your expense. (...)
If I use the dominant forms of communication online today (Facebook, Google, Twitter, YouTube, etc.) I can expect content customized for others to use my name and my words without my consent, in ways I wouldn’t approve of. Content “personalized” for me includes material I don’t want, and obscures material that I do want. And it does so in a way that I may not be aware of.
This isn’t an abstract problem like a long-term threat to democracy, it’s more like a mugging — or at least a confidence game or a fraud. It’s violence being done to you right now, under your nose. Just click “like.”
posted by adrianhon at 3:58 PM on June 28, 2014 [2 favorites]
And finally, Sandvig highlights a "fab and ethically exemplary Facebook emotional contagion study design" in PLOS ONE: Detecting Emotional Contagion in Massive Social Networks, published in March:
Happiness and other emotions spread between people in direct contact, but it is unclear whether massive online social networks also contribute to this spread. Here, we elaborate a novel method for measuring the contagion of emotional expression. With data from millions of Facebook users, we show that rainfall directly influences the emotional content of their status messages, and it also affects the status messages of friends in other cities who are not experiencing rainfall. For every one person affected directly, rainfall alters the emotional expression of about one to two other people, suggesting that online social networks may magnify the intensity of global emotional synchrony.
...and now I'm going to bed.
posted by adrianhon at 4:01 PM on June 28, 2014 [5 favorites]
> Today I permanently deleted my account.
Yeah, this. I deleted my account 3+ years ago, and the correctness of that decision is reaffirmed almost daily.
I'm sick of all the handwringing about needing Facebook to keep in touch with relatives or a caving club or that friend you made while living overseas your junior year of college.
Because you know what? No you don't.
Facebook is a convenience, and participating in something as morally bankrupt as Facebook for the sake of convenience isn't really defensible with just a hand wave and a comment about getting the latest updates from your Warhammer club.
Just opt out. It's actually pretty easy.
posted by toofuture at 4:05 PM on June 28, 2014 [7 favorites]
Hmmm. January 2012 is about the time when my disgust with Facebook escalated to the point that I walked away and never looked back, not even to delete my account. Correlation does not imply causation, and all that...
posted by HillbillyInBC at 4:10 PM on June 28, 2014
I don't see much validity or insight in editor Fiske's email reply. Apparently, consent can be waived if the experimental side effects are "judged" to be within the noise range of the population. What precedent is there for this strange concept? I don't even understand the logic of this argument. Death and sickness fit "everyday experiences", but no medical researcher gets the power to interfere with which of their subjects gets to survive in the name of science, no matter how inconsequential their lives may seem to you.
Secondly, just because Facebook gets to do something a certain way as a business, doesn't mean scientists get a free pass on ethical methodology.
Third, Fiske uses the idea of having worked in an IRB for X years to assert that this IRB shouldn't be second-guessed. How authoritarian is that? Uncritical.
These intellectual hacks knowingly ran computer programs that would leave some individuals psychologically better off than other individuals, without their consent. This is corrupt thinking, protected by shortsighted rationalization.
posted by polymodus at 4:22 PM on June 28, 2014 [10 favorites]
> I know two of those authors quite well. The way that you're framing this as Facebook being evil is really a slap in the face to respected academic researchers.
Oh, OK, got it. The researchers are the ones being harmed here. Thanks for the clarification.
Your comment is so absurdly dismissive of the people that were intentionally harmed that it would be humorous if it wasn't so offensively out of touch.
posted by toofuture at 4:25 PM on June 28, 2014 [16 favorites]
I've had cursory interactions with IRBs (and have close friends who've served on them or been reviewed by them) and I really don't get the purpose of informed consent in social science research. In medical research, drugs have objective effects. In psychology for example, telling someone you're looking at how they think has to have an effect on how they think. It is impossible to be an observer if you've essentially made yourself a participant by announcing your presence (greetings isolated villagers, please continue your lives as if two tall white guys with clipboards were not currently standing in your town square gaping).
posted by Octaviuz at 4:33 PM on June 28, 2014
I'm not getting off Facebook for a while yet because I'm at the initial stages of a social media PR campaign and can't afford to start second-guessing the tools I'm using now (though hopefully other alternatives will eventually be available), but I'm going to use it without any qualms or pretenses about decorum or the social/community aspects of its use, knowing that it's already corrupted well beyond any harm my self-promotional shilling is likely to do. Once I'm done using it, I'll throw it away.
posted by saulgoodman at 4:35 PM on June 28, 2014
I am amused? by all the references to Milgram and his obedience experiment, since of course the other thing for which Milgram is famous is the founding study of social network research: his small-world (six degrees of separation) experiment.
posted by Westringia F. at 4:41 PM on June 28, 2014
Octaviuz, as has been pointed out throughout the thread there is a rather substantial difference between merely observing and actively manipulating subjects. The latter is what happened here and it should most certainly require informed consent.
posted by Hairy Lobster at 5:14 PM on June 28, 2014
saulgoodman: "If there is not a massive class action lawsuit against Facebook on behalf of its mentally ill users, it will be a shame."
That seems interesting, because just in the last couple of years I've treated two different people involuntarily committed to psychiatric hospitals for psychosis and whose main focus of delusion was their compulsive, relentless and obsessional interaction with Facebook. One of them, when they came into the hospital, did not eat for several days and could only say "I am Facebook" repeatedly.
I haven't read this paper, but from what I've heard about it, it goes beyond simple observation of public social media data and involved randomized testing of unwitting subjects without explicit informed consent. That seems a clear violation of the Helsinki Declaration, should not have passed any reputable or conscious IRB, and demonstrates clear defects in ethical judgement in the researchers and the publication journal.
posted by meehawl at 5:39 PM on June 28, 2014 [2 favorites]
I know two of those authors quite well.
Can you kick both in their reproductive organs for me?
The way that you're framing this as Facebook being evil is really a slap in the face to respected academic researchers.
Yeah, boo hoo hoo for the researchers. I hope they lose their fucking jobs. I'd love to read the IRB release on this one. I also think you have a different definition for the word respected than most people. I think they are pieces of shit.
posted by cjorgensen at 5:52 PM on June 28, 2014 [15 favorites]
I'm sick of all the handwringing about needing Facebook to keep in touch with relatives or a caving club or that friend you made while living overseas your junior year of college.
Because you know what? No you don't.
It's really not quite that easy. I'm a committee member & preserve manager for a 501c(3) that owns several cave preserves, and they expect me to post informational updates concerning my little preserve via Facebook as part of the duties I volunteered for. It's an integral part of the public-facing cave preservation advocacy that we do, so I'm pretty well stuck with that. I intend to keep that up, but am scaling back everything else.
posted by Devils Rancher at 5:59 PM on June 28, 2014 [2 favorites]
Putting aside the issue of informed consent for a second (and I appreciate that this is the main issue), I find the pearl clutching about how this study "harmed" people laughable. Here is a Metafilter FPP suggesting that seeing happiness on Facebook actually makes us sad. So maybe these unethical researchers who made it less likely that positively worded posts showed up in some people's news feeds were actually making the world a better place. Certainly the fact that the treatment group used 1 fewer positive word per 600 total words written is not exactly evidence of harm.
posted by leopard at 6:02 PM on June 28, 2014 [1 favorite]
In that case, I am choosing to see shit that makes me unhappy. My bad.
In this case some douchebag decided I needed to see negative posts. Fuck that guy. Fuck Facebook for showing me posts in anything other than chronological order.
posted by cjorgensen at 6:07 PM on June 28, 2014 [6 favorites]
There is always sending a notice to the Hague for a violation of the Universal Deceleration of Human Rights.
I'm sorry, that's too good a typo/autocorrect not to point out. This kind of manipulation is fascinating to me, however I'm probably the kind of person that would stare in fascination at an incoming missile. I actually welcome the news as a reminder that business ethics is at best a nonsense phrase.
posted by Divine_Wino at 6:08 PM on June 28, 2014 [3 favorites]
If there is not a massive class action lawsuit against Facebook on behalf of its mentally ill users, it will be a shame.
What about sane people that just had shitty days?
posted by cjorgensen at 6:10 PM on June 28, 2014 [1 favorite]
It's astonishing that Facebook will spend millions on advertising and PR, and then pull a bone-headed move like this.
And they do it over, and over, and over.
I dumped FB a few years ago so I really only keep up with the company when it's in the news (or here) and it is never positive.
ANNOUNCER/FPP: Blah blah blah facebook --
ME: Gawd, now what?
It's the only company I can think of that makes its customers dread new features. Maybe it's just confirmation bias, but I can't think of anything Facebook has announced that has made me reconsider reactivating my account.
posted by Room 641-A at 6:34 PM on June 28, 2014 [3 favorites]
Well, cjorgensen, I'm actually mostly thinking of relatively sane people who've struggled with bipolar or just plain old garden variety clinical depression. People in that predicament (which I've been in myself in the past, though I'm doing fine these days) have a hard enough time keeping their heads above water without some douche bag social media exec or researcher standing safely on shore and deliberately trying to push them back under again with their boot just to see if they can.
posted by saulgoodman at 6:56 PM on June 28, 2014 [12 favorites]
> It's the only company I can think of that makes its customers dread new features.
Let me guess: you don't use twitter.
saulgoodman, I got that. I think you're correct. This is damaging to people that are already fragile. I was just trying to point out your scope might be limited. Sure, I might have only been bummed out and kicked a puppy or two, but I was still damaged (presuming I am sane and a test subject). I'm just saying, you don't have to have had demonstrable damage to have been harmed.
posted by cjorgensen at 7:08 PM on June 28, 2014 [1 favorite]
Ah, got you. That's true. A bigger class would teach more of a lesson, too.
posted by saulgoodman at 7:12 PM on June 28, 2014
How do you all feel about the Pepsi Challenge? Is it okay that other businesses manipulate people in order to increase their profits?
I am no social scientist, but I am a veteran of the Great Cola War of '75. How is this anything like what is happening here?
So, mefites, my smart computer literate mefites..... who is going to make a better internet? I sort of have this dream of there being a metafilter internet search engine.... a mefitebook.... facefilter....... I guess the whole issue is making the whole thing for cost?
I was going to lament the loss of diaspora* but it looks like it's back. I failed to gain any momentum last time, but it looks interesting:
Many networks use your data to make money by analysing your interactions and using this information to advertise things to you. diaspora* doesn’t use your data for any purpose other than allowing you to connect and share with others.
Host it yourself
Choose where your data are stored by choosing a pod you’re happy with. If you want to be really secure, you can set up and host your own pod on servers you control, so no one can get at your personal data.
Decentralization
Instead of everyone’s data being contained on huge central servers owned by a large organization, local servers (“pods”) can be set up anywhere in the world. You choose which pod to register with - perhaps your local pod - and seamlessly connect with the diaspora* community worldwide.
Privacy
In diaspora* you own your data. You do not sign over any rights to a corporation or other interest who could use it. With diaspora*, your friends, your habits, and your content is your business ... not ours!
On preview: Let me guess: you don't use twitter.
Ding! Not actively anymore, no.
posted by Room 641-A at 8:20 PM on June 28, 2014 [3 favorites]
toofuture: “Just opt out. It's actually pretty easy.”
It really isn't. As I already said, like Devils Rancher, I have obligations — professional obligations — to people. Your blithe dismissal of these obligations does not relieve me of them. So how about you cut those of us who are put in a difficult position by these revelations some slack?
posted by ob1quixote at 8:26 PM on June 28, 2014 [1 favorite]
It's not really going to help those of us who are currently stuck supporting organizations with Facebook as a major part of their public-facing web presence, but for those just looking for a place to host their local gaming club or what have you, have a look at Glassboard.
I've been using it for a while now to keep up with certain friends privately. There are iPhone and Android apps that do push notifications, you can share pictures, there are favorites, and the privacy terms are favorable. It's a reasonable alternative to Facebook for certain use cases.
posted by ob1quixote at 8:38 PM on June 28, 2014 [3 favorites]
Something smart from a social scientist: In defense of Facebook.
There is little doubt that the present backlash will do absolutely nothing to deter Facebook from actually conducting controlled experiments on its users, because A/B testing is a central component of pretty much every major web company’s business strategy at this point–and frankly, Facebook would be crazy not to try to empirically determine how to improve user experience. What criticism of the Kramer et al article will almost certainly do is decrease the scientific community’s access to, and interaction with, one of the largest and richest sources of data on human behavior in existence. You can certainly take a dim view of Facebook as a company if you like, and you’re free to critique the way they do business to your heart’s content. But haranguing Facebook and other companies like it for publicly disclosing scientifically interesting results of experiments that it is already constantly conducting anyway–and that are directly responsible for many of the positive aspects of the user experience–is not likely to accomplish anything useful. If anything, it’ll only ensure that, going forward, all of Facebook’s societally relevant experimental research is done in the dark, where nobody outside the company can ever find out–or complain–about it.
posted by myeviltwin at 9:31 PM on June 28, 2014 [6 favorites]
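For anyone unfamiliar with the jargon in that excerpt: "A/B testing" just means randomly splitting users into buckets and serving each bucket a different variant of the product. A toy sketch of the idea in Python, with an invented experiment name and an invented 10% treatment share; nothing here describes Facebook's actual systems:

    import hashlib

    def assign_bucket(user_id: str, experiment: str, treatment_share: float = 0.1) -> str:
        """Deterministically assign a user to 'treatment' or 'control'.

        Hashing (experiment, user_id) gives a stable pseudo-random draw, so the
        same user always lands in the same bucket for the same experiment.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        draw = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
        return "treatment" if draw < treatment_share else "control"

    # Hypothetical usage: decide which feed-ranking variant this account sees.
    variant = assign_bucket("user_12345", "feed_ranking_tweak")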
The Nazis did all those terrible experiments and we can't change that, but surely we shouldn't discard all that valuable data they collected?
Yes. Yes we should discard it.
posted by straight at 9:39 PM on June 28, 2014
We'll see, myeviltwin. There are other alternatives than those framed in that purple prose you blockquoted.
posted by saulgoodman at 9:40 PM on June 28, 2014 [4 favorites]
There is a metric fuck-ton of excluded middle in Dr. Yarkoni's response.
posted by Uniformitarianism Now! at 9:41 PM on June 28, 2014 [4 favorites]
But haranguing Facebook and other companies like it for publicly disclosing scientifically interesting results of experiments that it is already constantly conducting anyway
Facebook was already attempting to make its userbase unhappy by systematically reducing access to positive entries?
posted by ROU_Xenophobe at 9:44 PM on June 28, 2014 [2 favorites]
There is a metric fuck-ton of excluded middle in Dr. Yarkoni's response.
Yup. They could have, for example, conducted a small pilot study with properly recruited subjects and clear informed consent, and demonstrated that the subjects found their intervention innocuous. They could have attempted to debrief the subjects and give them an opportunity to object ex post. They could even have limited their intervention to adding more positive items to subjects' news feeds.
posted by ROU_Xenophobe at 9:52 PM on June 28, 2014 [14 favorites]
I reluctantly rejoined FB a month ago because it's apparently impossible for my colleagues to organise a social event without Facebook. I'd rather be a social outcast than put up with this shit.
Someone (I'm on my phone or I'd check who) compared this to Starbucks cutting sugar from your coffee and observing that you dislike the coffee.
No. It's more like Starbucks randomly switching sweetener for sugar and soy for milk without having any clue how many of their customers have diabetes or lactose intolerance. But it's okay, because none of their customers left feedback saying that they'd gone into a diabetic coma. And anyway, they willingly walked into the Starbucks by themselves, and there aren't any rules about Starbucks filling their Splenda sachets with glucose. And since they didn't do it in the Starbucks on the hospital concourse, they didn't need ethics approval!
posted by sodium lights the horizon at 12:23 AM on June 29, 2014 [16 favorites]
it doesn't work
This. Not so much a Facebook problem as Amazon, but similar in the degree of snoopiness, the creepy tendency to start getting ads everywhere from anything I've recently looked at in any online store. Helllloooo, marketers, it's a little late to show me ads for lawnmowers when the Clue you have snarfed up is that I just bought a lawnmower.
posted by localroger at 6:02 AM on June 29, 2014
Facebook hasn't really figured out how to make money from us
It's not for want of trying, though, and they are surely getting more sophisticated about it. And when they find manipulation schemes which work, they'll have very strong incentives to avoid publishing a detailed description of their methods.
I think this current paper must have been in the works for some time and Facebook must already have better methods, because the sentiment analysis technique used in the paper is extremely primitive, so weak that it's almost like they published it to assure people that they don't know how to do anything useful with their data. When they got that tiny effect size, why didn't they repeat the analysis with a more sophisticated detector? Professor Neural-Net, Yann LeCun is running their AI group, so it's not like they lack the skills for it.
posted by Estragon at 6:07 AM on June 29, 2014 [1 favorite]
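To make "extremely primitive" concrete: LIWC-style sentiment scoring is essentially counting hits against fixed word lists. A toy sketch of that approach, with invented word lists rather than the real LIWC dictionaries:

    # Toy word-list sentiment scorer: count matches against fixed lists, nothing more.
    POSITIVE = {"happy", "great", "love", "wonderful", "excited"}
    NEGATIVE = {"sad", "awful", "hate", "terrible", "lonely"}

    def classify(status: str) -> str:
        words = [w.strip(".,!?\"'").lower() for w in status.split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        if pos > neg:
            return "positive"
        if neg > pos:
            return "negative"
        return "neutral"

    print(classify("So happy about the new job!"))  # positive
    print(classify("Meh."))                         # neutral: no listed words at all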
You have the right to know if you were part of this research. Contact the chairs of the departments of the academic authors and inquire as to whether you were part of the study and which condition you were in.
posted by srboisvert at 6:47 AM on June 29, 2014 [6 favorites]
I don't see much validity or insight in editor Fiske's email reply. Apparently, consent can be waived if the experimental side effects are "judged" to be within the noise range of the population. What precedent is there for this strange concept? I don't even understand the logic of this argument. Death and sickness fit "everyday experiences", but no medical researcher gets the power to interfere with which of their subjects gets to survive in the name of science, no matter how inconsequential their lives may seem to you.
You need to look at Fiske's most significant research. You'll laugh when you see what it is about. Basically high status people don't give much of a shit about lower status people while low status people really pay attention to high status people.
posted by srboisvert at 6:53 AM on June 29, 2014
Let me guess: you don't use twitter.
Or any google product, like search or gmail. (E.g., let's make gmail have an API that's not POP or IMAP!)
I should add, coming back to the thread after the long discussion about ethics, that my severe lack of surprise about Facebook tampering with the feed for research doesn't mean I think it's ethical. It's just that I've never seen anything that suggested to me that Zuckerberg would care, or would foster a regime in his company of caring, about the ethical considerations of the feed.
FWIW, I feel the same way about Twitter, Amazon, Google, etc. It's just that Facebook is very blatant about not caring so they're the ones who got caught.
posted by immlass at 7:29 AM on June 29, 2014
Something smart from a social scientist: In defense of Facebook.
There is little doubt that the present backlash will do absolutely nothing to deter Facebook from actually conducting controlled experiments on its users, because A/B testing is a central component of pretty much every major web company’s business strategy at this point...
This isn't A/B testing of your perception of a company's product or of their marketing message, this is A/B testing of your perception of your friends and family and your friends and family's lives, A/B testing of reality. How can someone knowledgeable about this stuff breeze by that distinction?
This person actually gives, as an example of a negative message that might be omitted, not getting informed of someone's mother's death... and then seems to continue on talking as though the discussion is simply about tuning the interface of a web site. Like, whether or not you get informed about someone's death is merely a user interface issue.
It's best to read the comments there, my favorite of which is "This defense is weaker than homeopathic tea." But yeah, I would give the thumbs down to saying that this is "something smart."
posted by XMLicious at 7:31 AM on June 29, 2014 [14 favorites]
Via Twitter: Any coverage of FB emotional experiment should note Cornell (participant) coverage a month ago saying it was Army funded (see bottom)
posted by furtive at 7:33 AM on June 29, 2014 [2 favorites]
So now that we've established that Facebook did the equivalent of switching soy for milk without worrying about the lactose intolerant, does it follow that Facebook has a moral obligation to make it harder for negative posts to appear in people's feeds? Aren't they enabling harm by letting the Debbie Downers of the world post freely? Don't they have an obligation to be concerned about the mental health of their users?
posted by leopard at 7:48 AM on June 29, 2014
Ethical issues aside (which are admittedly hard to put aside, as they are quite substantial and disturbing), the methodology, results, and write-up of this study raise quite a few other issues in my mind…
They have conducted an experiment to measure the effects of changing social behavior, but they really did not need to do this. They have a ridiculous amount of *actual* behavior — spontaneous and natural and raw, no A/B testing necessary! They could have just analyzed this and not had nearly the same ethical issues as their chosen methodology drags in. Somebody suggested a better methodology upthread; it's one of many possibilities using what Facebook already has, in spades. Including all the demographic data (age, gender, statuses) to boot — which they seemed to have completely ignored in their current study. Surely age at the very least is correlated with participants' emotional states relative to others? Or put another way, Facebook users aren't a monolithic group who respond to "emotional contagion" effects in exactly the same way, across all demographics and levels of social involvement, so why should we expect them to respond as such? At any rate, usually an experiment like this contains a section that justifies and explains why a particular methodology was chosen above all others.
But even as an experiment, conducted as they have done, I wonder if it measured the thing they intended to measure? At least as I understand it from the write-up. They wanted to measure the effect of positive vs. negative statuses on subsequent statuses by other users in the friendship group. But I think what they got (if I'm understanding this correctly) is the effect of a sudden decline (or increase) in a friend's status. So the effect of a *change* in a friend's posting behavior (and whether the direction of the change, positive or negative, had an effect on others' future behavior). Which, yeah, if a friend who posts quite a lot of cheery stuff suddenly took a nosedive into darkness (or vice versa), then it would probably have an effect on others. Depending on the person of course, their relationship to me, their involvement in the network and the friend group and Facebook in general, as well as the general type of statuses they post and whether or not they generally post happy ones or sad ones or personal ones or a balanced mix of positive and negative, therefore making a difference both noticeable and measurable. In other words, who the heck are these 600,000+ subjects (in general, not names of course) and what are they like? What were the sampling procedures? Criteria? This should be in the reporting of the methodology, under something akin to 'population sample characteristics/demographics and procedures'.
Also, related to subjects, the only way I could see this actually working is if there is just one manipulated feed per social network group. And the groups should be distinct from each other. Otherwise you have confounding effects where, say, Friend A in my FB network has a manipulated feed and so does friend B. If they're manipulated in the same direction then there's a group effect where, say, my social network of friends seems really negative (or positive). If they're manipulated in different directions then my statuses can't be accurately measured (am I being influenced by Friend A or by Friend B? Or both?). Therefore, you must have one friend as the main variable and the rest of their network whose reactions are observed as a result of the manipulation of the main feed. As such, the actual number of manipulated feeds is MUCH smaller than 600,000+, because you need to count all of the "reactive friends" as study subjects too. And we have to assume that the manipulated feed friend A) has a substantial number of friends in their network, B) is active on facebook, C) posts enough statuses that their feed can be filtered.
Which brings me to the problem that somebody else pointed to upthread, with the LIWC algorithm. The article is here, and it basically delves into the issues with sentiment analysis and why the tool they used to determine whether statuses were 'positive' or 'negative' was both inaccurate and very ill-suited for analyzing FB statuses.
Along the lines of sentiment analysis difficulties: what about statuses that were neither positive nor negative? Or were both? Or ambiguous? Or quotes? Or contained a negation? Or were sarcastic?
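To make that failure mode concrete, here is a toy word-count scorer in Python. The two tiny lexicons are invented for the example (they are not LIWC's actual dictionaries), but the bare word matching is the same style of counting a dictionary-based tool relies on:

    # Toy illustration of dictionary-based word counting, NOT LIWC itself.
    # The lexicons are made up for the example.
    POSITIVE = {"happy", "great", "love", "fun"}
    NEGATIVE = {"sad", "awful", "hate", "sick"}

    def count_sentiment(status):
        """Return (positive_hits, negative_hits) by bare word matching."""
        words = status.lower().replace(",", " ").replace(".", " ").split()
        return (sum(w in POSITIVE for w in words),
                sum(w in NEGATIVE for w in words))

    print(count_sentiment("I am not happy about this"))            # (1, 0): negation missed
    print(count_sentiment("great, another Monday at the DMV"))     # (1, 0): sarcasm read as positive
    print(count_sentiment("sad to leave, but what a great trip"))  # (1, 1): mixed status, ambiguous

Negated, sarcastic, and mixed statuses all get counted with straight faces, which is exactly the worry with applying this kind of tool to short, informal posts.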
I also don't quite understand why, if they want to measure "emotional contagion", they didn't just measure it more directly? As in, seeing whether people who specifically say "I am happy/sad today" or who put their 'happy' or 'sad' emoticon faces on actually influence the behavior of others to do so. Seems to me that you would need to measure that relationship FIRST before you go ahead and measure indirect expressions of mood and emotion.* And especially before you go ahead and measure the manipulation of access to indirect expressions of emotional states. It's so abstracted and ungrounded as it is, that you really can't even be sure that the ridiculously small effect they observed is related to the claim they're making.
The methodology overlooks the context in which the interactions and the algorithm manipulations occurred. The context being that we kind of know who our friends are, what's generally going on in their lives, and how they tend to behave online. So that when you randomly manipulate that, it's very incongruent with how we expect our social worlds to work. Made up example: my friend who just had a baby and whose feed has been chock full of regular happy status updates for the last nine months is all of the sudden only showing me the statuses about unhappiness with expired food? It doesn't make sense with what we know and expect, it's unnatural and we sense it. Subtle changes like this cause us to subconsciously question and mistrust not our friends, but the environment in which we're interacting with them.
There was no qualitative analysis done at any stage it seems? This is really needed in studies involving big data in social science research, where you may have correlations but the aggregate of data obscures the entire range of possibilities within. It's transparent and helpful to provide further contextualization of your data outcomes with qualitative examples. Basically, if you're adding up a bunch of examples of X, you need to know what the majority of those X examples actually look like. What are they? How were the ambiguous cases dealt with? In this particular case, what does a typical positive or negative status look like? How typical is it? What are the typical responses to those statuses? How typical are they? What data were discarded? On what basis?
What about reporting info about the study steps and analysis procedures, time span, and other pertinent information about how this was conducted? Studies -- especially experiments -- should be designed with the ability to be replicated in mind (assuming one would have access to the subjects/tools/etc. used to conduct the study). This study, written as is, leaves way too many questions unanswered, imho. And too much of what is there raises new issues, which aren't explained anywhere (e.g., What were the words used to indicate positive and negative states? How were the subjects determined? What about the discussion of the results? Possible other or confounding factors? Limitations and generalizability? And the internal and external validity of the results and statistical measures?**).
There's so much to say, but it seems almost pointless. And I realize not everything that has been brought up in this thread can possibly be responded to within publication word or space constraints. However, when you're dealing with sensitive data such as these, including deception and questionable methods and ethics, bases should be covered. Minimally at least. And so I'm starting to feel like this was maybe a George Washington Bridge traffic study all along.
*Fortunately there is a wealth of reputable, published mixed-methods research on this topic, much of it highlighting the complexity of measuring such things and justifying the need for qualitative analysis alongside 'big data' presentations.
**I know they got into this a little teeny bit, but it seems to me they could have explained more about why their effect size was so small, and how they could have gotten better results (perhaps looked along demographic factors, where effect sizes would likely have been bigger in younger networks or in denser, multi-plex networks)…as is, I felt that they went with a big claim based on a tiny result and didn't fairly hedge it; give us something to make us believe, convince me.
posted by iamkimiam at 7:52 AM on June 29, 2014 [25 favorites]
So now that we've established that Facebook did the equivalent of switching soy for milk without worrying about the lactose intolerant, does it follow that Facebook has a moral obligation to make it harder for negative posts to appear in people's feeds? Aren't they enabling harm by letting the Debbie Downers of the world post freely? Don't they have an obligation to be concerned about the mental health of their users?
Well I guess this is an improvement... this is up to, what, the equivalent of about 70 years ago in medical ethics? Getting to the point of asking, "Can we withhold all bad news and just play God?"
posted by XMLicious at 8:29 AM on June 29, 2014
Yes, leopard, Facebook has an ethical obligation to artificially suppress the full range of human emotion because bad news and bad feelings are yucky and people shouldn't have to be brought down by news of a friend's or loved one's death or other bad news. Social support for the aggrieved is irrelevant; we'll be better off as a society if people with problems and bad news are just completely isolated and shunned before they can infect the rest of us with their icky negativity.
posted by saulgoodman at 8:34 AM on June 29, 2014 [2 favorites]
posted by saulgoodman at 8:34 AM on June 29, 2014 [2 favorites]
So on one hand some unethical researchers deliberately exposed people to poison without their consent, but on the other hand poison is a naturally occurring substance and only God has the right to determine how it's distributed -- or rather, only profit seeking entities with no interest in social science research have a right to determine how it's distributed?
posted by leopard at 9:00 AM on June 29, 2014
posted by leopard at 9:00 AM on June 29, 2014
Here's what I would take from all this, were I Facebook:
This study shows that people's posts resonate with whatever they're seeing in their feeds; the effect is weak, but (thanks to their crazy large N) not insignificant. But they also got another valuable piece of data: they'll get blowback from the public if they blatantly censor content to manipulate users' emotional states.
However, along with arbitrarily choosing the posts that appear in users' feeds -- which was already the status quo prior to this study, albeit not for the explicit purpose of deliberately depressing people -- Facebook also re-orders them. This is a more obvious manipulation on FB's part, since the "sort by top stories/date" option appears at the top of one's feed. It's also likely to be more palatable to users, since a) weird re-ordering is already the default; b) FB isn't "hiding" any content from you simply by re-ordering (unlike their filtration); and c) one can always choose to un-sort.
Which gets me to this: FB's interest is in selling ads, which means they need to make a case that FB is a better advertising vehicle than other media streams. So here's the answer: re-order the streams so that the posts with positive emotional content are stacked right before the ad. Hell, build up to the ad, like some sinister commercial foreplay. Then taper off the positive posts so that the farther the user scrolls from the ad, the worse the mood of their feed becomes. All the content's there, bizarrely re-ordered as always, nothing unusual here....
Except that you feel good every time you see that soma ad. Your friends seem to feel good around that soma ad, too! Isn't it exciting? So much fun! Maybe you should click that ad -- what the heck! But if you scroll down -- too awful! Deaths, divorces, layoffs, injustices... Wouldn't you like to scroll back up around that ad to where your friends' posts are happy and linger near it a bit more?
Why you don't take soma when you have these dreadful ideas of yours? You'd forget all about them. And instead of feeling miserable, you'd be jolly. So jolly.
posted by Westringia F. at 9:31 AM on June 29, 2014 [4 favorites]
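For what it's worth, a minimal sketch of what such a ranker could look like, in Python. The posts, sentiment scores, and ad copy are all invented, and nothing here describes an actual Facebook system; it only makes the speculative "happy posts nearest the ad" idea above concrete:

    # Speculative sketch only: rank the feed so it gets gloomier the farther
    # the reader scrolls from the ad. Posts, scores, and the ad are invented.
    posts = [
        ("car broke down again", -0.4),
        ("new job, so excited!", 0.9),
        ("meh, Monday", -0.1),
        ("beach day with the kids", 0.7),
        ("grandpa passed away", -0.9),
    ]

    def order_around_ad(posts, ad_text="SOMA(tm) -- feel jolly again"):
        """Ad on top, then posts from most to least positive sentiment score."""
        ranked = sorted(posts, key=lambda post: post[1], reverse=True)
        return [(ad_text, None)] + ranked

    for text, score in order_around_ad(posts):
        print(text)

All the content is still there, just arranged so the sunniest material sits next to the ad slot.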
(Obviously, the goal of all advertising is to make you associate a positive mood with a product. The difference here would be co-opting your friends' status posts in the service of that, instead of showing you random happy strangers.)
posted by Westringia F. at 9:32 AM on June 29, 2014 [1 favorite]
So on one hand some unethical researchers deliberately exposed people to poison without their consent, but on the other hand poison is a naturally occurring substance and only God has the right to determine how it's distributed -- or rather, only profit seeking entities with no interest in social science research have a right to determine how it's distributed?
If you arrogate to yourself the decision to add and remove lactose as an ingredient in things in what other people consume according to your own wishes, especially as a way to gain some sort of traction over those people, instead of just fucking telling them what has lactose in it and what doesn't or otherwise helping them to understand their medical choices, that's when you're playing God.
Whatever sort of dilemma you're trying to construct where Facebook or the researchers knowing which communications will tend to have positive or negative emotional effects creates some ethical justification to deceive or manipulate people instead of just telling them that information, is not a dilemma and does not create any such justification.
posted by XMLicious at 9:44 AM on June 29, 2014 [2 favorites]
And if my speculation is anything like what FB has in mind, possibly the most valuable outcome of this study is that the post affect classification algorithms work. Not perfectly, perhaps, but well enough to produce an effect, and probably well enough to justify dev time to implement them & to include them in whatever prospectus they parade to advertisers/funders. How wonderful that they got academics to establish that for them!
posted by Westringia F. at 10:15 AM on June 29, 2014
Facebook already plays God. They're an advertising company under pressure to increase profits. There are people whose job it is to figure out how to get people to play more stupid video games. Or click on more ads. Or log in more often. Or make more connections. This is not some big secret either, they are publicly held and investors are explicitly expecting massive increases in revenues. This is not going to happen organically. And I am not aware of any movement to force Facebook to reveal to the public exactly how they are going to manipulate people to better their bottom line.
I'm not trying to "construct" a dilemma. People have a lot of inconsistent moral beliefs. That's just life.
posted by leopard at 10:16 AM on June 29, 2014
But it's not just FB. All of these companies run experiments on their websites using the A/B method. They just don't have the guts or wherewithal to publish their findings. What's different here is that FB seemed to be asking a question that was not directly related to getting people to read ads specifically tailored for them... at least not yet.
posted by bluesky43 at 10:23 AM on June 29, 2014
bluesky43, the significant difference in this case is that FB was interfering with communications between their customers, not just tuning their own communications with their customers.
posted by Estragon at 10:38 AM on June 29, 2014 [1 favorite]
Can we do a study to see why certain people shrug off malevolent behavior on the part of corporations and organizations as "just something that happens"?
"Yeah, well, if you live in a city with a chemical plant, you're going to die from a negligent release of poison, it's just what happens" or "yeah, don't drink your well water, we have fracking around here, pretty inevitable, really" or "yeah, you signed up for a social sharing site so you could keep up with your friends, sorry they conducted an experiment on you."
It reeks of privilege, namely "well, I'm not depressed, so" or "well, I live in a place where we can afford to outsource the dirty stuff needed to keep the lights on".
We have this admiration of avarice in this society which is awful. It's not only that a company makes money, but they do everything in their power to make as much as possible, all laws and ethics be damned, and somehow a lot of people find that bonerific.
posted by maxwelton at 10:46 AM on June 29, 2014 [8 favorites]
"Yeah, well, if you live in a city with a chemical plant, you're going to die from a negligent release of poison, it's just what happens" or "yeah, don't drink your well water, we have fracking around here, pretty inevitable, really" or "yeah, you signed up for a social sharing site so you could keep up with your friends, sorry they conducted an experiment on you."
It reeks of privilege, namely "well, I'm not depressed, so" or "well, I live in a place where we can afford to outsource the dirty stuff needed to keep the lights on".
We have this admiration of avarice in this society which is awful. It's not only that a company makes money, but they do everything in their power to make as much as possible, all laws and ethics be damned, and somehow a lot of people find that bonerific.
posted by maxwelton at 10:46 AM on June 29, 2014 [8 favorites]
My take on this: why does Facebook not offer an advertising-free vehicle for a monthly/annual subscription fee? I use FB extensively to stay in touch with friends/family in the US when in Ireland and vice-versa. It creates a sense of connectedness with my daughters, grandchildren, other family and friends that I have not been able to replicate through other means. I also enjoy the limited subscriptions (Reich, Moyers, Science Friday etc). In other words, offer one primarily designed to meet the needs of its customers--those paying for the service. One that creates a business model based on expectations of reasonable privacy and freedom from "manipulation".
If you wish to use FB for free then you will be part of an advertising/payer driven model and will give up some additional privacy and control. As to whether this is ethical or unethical--probably borderline unethical, but I seriously question how anyone can expect to receive a product as complex as FB (with its elaborate infrastructure) without paying a price. I see no reason one should expect FB to be a public utility/service where everyone receives the same service. As someone posted before--there is no such thing as a free service. Outrage over commercial exploitation rings a false note with me--I really do not see the deep ethical issue when placed side by side with the vast majority of obvious and not so obvious manipulation via advertising on TV, magazines, radio, public spaces etc.
posted by rmhsinc at 11:48 AM on June 29, 2014 [1 favorite]
Everyone should read iamkimiam's fantastic comment about the methodological issues of this study.
From the network science side, I want to highlight this:
Otherwise you have confounding effects where, say, Friend A in my FB network has a manipulated feed and so does friend B. If they're manipulated in the same direction then there's a group effect where, say, my social network of friends seems really negative (or positive). If they're manipulated in different directions then my statuses can't be accurately measured (am I being influenced by Friend A or by Friend B? Or both?).
It's not exactly that -- the question they're asking is whether manipulating your feed (which posts you see) changes your posting behavior, not whether manipulating your friends' feeds changes your posting behavior -- but it's problematic for similar reasons. Specifically, if you happen to be in a community where the majority of your friends are in the negative arm and are starting to post more negative things, the manipulation engine will have a lot more negativity and less positivity to choose from when manipulating your feed. One can imagine this becoming a feedback loop (negative if you're supposed to be in the positive arm, positive if you're supposed to be in the negative arm). So as iamkimiam rightly pointed out, the exposures of people in a given arm are not uniform, but rather depend on the groups into which their friends have been randomized. And one can't simply appeal to the CLT and claim this will be a wash, since even though your friends may be randomized into arms, the correlations between them (and between them and you) will depend on a graph topology that is non-uniform and non-random.
More generally, they failed to account in any way for the underlying network. Correlations and clique structure have the potential to confound the results. There's no attempt to account for degree (ie, # of friends), which might moderate the effect. Nor is there any attempt to account for the strength of ties to the people whose posts were selected to appear in manipulated feeds (something which could have been assessed by the level of engagement between users -- mutual liking, commenting, ...). &c, &c. Instead, they've taken the world's best known social network -- the canonical "Social Network" -- and treated its nodes as independent observations.
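A minimal simulation sketch of that non-independence point, in Python. The friendship graph and arm labels are invented for illustration; the point is only that users who share friends also share exposure, so their observations are correlated through the graph rather than being independent draws:

    # Randomize users in a tiny invented friendship graph into two arms and
    # see how each user's exposure depends on which arms their friends landed in.
    import random

    friends = {                      # toy undirected friendship graph (made up)
        "A": ["B", "C"],
        "B": ["A", "C"],
        "C": ["A", "B", "D"],
        "D": ["C", "E"],
        "E": ["D"],
    }

    random.seed(0)
    arm = {user: random.choice(["positivity-reduced", "negativity-reduced"])
           for user in friends}

    def share_of_gloomier_friends(user):
        """Fraction of a user's friends assigned to the positivity-reduced arm."""
        fs = friends[user]
        return sum(arm[f] == "positivity-reduced" for f in fs) / len(fs)

    for user in friends:
        print(user, arm[user], round(share_of_gloomier_friends(user), 2))

    # Users in the same clique (A, B, C) share most of their neighbours, so their
    # exposures move together; they are not independent observations.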
Even if the data had been collected in a completely unimpeachable way, their analysis is disappointing. It's disappointing, too, that whoever reviewed it for PNAS didn't call them on it. Worse still is the fact that they did not even raise these matters in the discussion. How on Earth was this accepted?!
posted by Westringia F. at 11:52 AM on June 29, 2014 [9 favorites]
No one should have to expect having their psychology deliberately manipulated to potentially harm them as a cost of access to any service free or otherwise. It is not compatible with the health of a society to tolerate this kind of social violence, period.
posted by saulgoodman at 12:11 PM on June 29, 2014 [19 favorites]
Facebook already plays God. They're an advertising company under pressure to increase profits... I'm not trying to "construct" a dilemma. People have a lot of inconsistent moral beliefs. That's just life.
You haven't actually demonstrated that any moral beliefs related to this are inconsistent, though. It's exactly because Facebook obviously has no ethical qualms about doing anything at all to its users and because the high and mighty of the world will quite readily poison people or plunge the world into a financial crisis or start a war for a buck that academic institutions or researchers saying "Welp, Facebook already has that whole ethics thing properly handled" is preposterous.
Being able to directly en masse manipulate the way individuals perceive the world down to the point of filtering their interpersonal communication is a capability that will very rapidly exceed any quantifiable dollar value, especially once it can be coordinated in real-time across multiple channels of communication, just like the value of ubiquitous surveillance did; and at that point it will no longer obey conventions of corporate behavior. All this handwavy "that's just the way it is" and "that's just commercialism" stuff, in the interest of legitimizing and normalizing the filtering of communication the moment companies get their tentacles into any technological interface between people, is incredibly stupid and short-sighted and is going to contribute to things soon not being just the way they are now, the loss of a great deal more than some interesting data.
We can maybe eventually come back from the asymmetric position the public is in with surveillance by insisting on transparency and that the surveillance be pushed into every dark corner and be accessible to everyone, but if we accept and embrace and help along the selective and customized filtering by third parties of the very ways we communicate with each other and see the world, I'm not sure we can ever come back from that.
posted by XMLicious at 12:29 PM on June 29, 2014 [3 favorites]
I'm not trying to "construct" a dilemma. People have a lot of inconsistent moral beliefs. That's just life.
Have you ever considered asking your doctor if SELF-EXAMINATION is right for you?
posted by JHarris at 12:54 PM on June 29, 2014
So what is to be done? Facebook already filters interpersonal communication, there's nothing "natural" about a News Feed or Timeline. I'm just stating a fact. I guess you can scold me for stating this fact, say that I am part of the problem and that it is my complacency that enables Facebook to do what it does, but oddly enough that doesn't actually change anything.
People have grumbled in the past about Facebook messing with people's feeds, but I've never seen anyone call their actions immoral or unethical, just stupid and annoying. So I guess we can avoid a brave new world by banning academic collaboration with Facebook, and then people will go back to grumbling on Facebook about how much they hate Facebook. Problem solved.
posted by leopard at 1:03 PM on June 29, 2014 [1 favorite]
They could, you know, just not mess with people's feeds in ways designed to make them sad or otherwise harm them. That might be something.
posted by saulgoodman at 1:17 PM on June 29, 2014 [7 favorites]
saulgoodman "They could, you know, just not mess with people's feeds in ways designed to make them sad or otherwise harm them. That might be something."--or make them happy. Most media, advertising etc does exactly this all the time--seriously--without efforts to either modify members behavior towards advertisers, charge a subscription fee or become a regulated and tax supported utility do you suggest they make a profit and stay in business. One could suggest they only accept creative and compelling advertising (what ever that is). I do think they should have released a statement before authorizing the research that they would be conducting trials( randomly selected sub sample of x participants out of a universe of Y) and that anyone who did not want to be a possible subject could opt out of FB for that period. But some of the posts suggesting that this manipulation could significantly precipitate/cause major emotional turmoil is hyperbole--I would suggest that anyone this fragile would have long been taken over the edge by any number of movies, novels, commercial advertising or just day to day life.
posted by rmhsinc at 1:47 PM on June 29, 2014 [1 favorite]
If Facebook instituted a policy of making it less likely for the posts of Debbie Downers to reach a broad audience, and Mark Zuckerberg gave a speech defending this policy by saying that he had made this decision because of his great sensitivity to the emotional well-being of the Facebook user base, and that he simply could not stand by while his billion dollar business inflicted so much harm and sadness on billions of people across the globe -- would anyone here consider this some sort of grand humanitarian gesture?
(Rhetorical question of course, outrage is a hydra.)
posted by leopard at 1:59 PM on June 29, 2014
I would say that it was a nice goal, but hugely uninformed.
posted by iamkimiam at 2:44 PM on June 29, 2014 [4 favorites]
This is all false equivocation. The explicit point of the whole exercise in this case was to see if they could make certain targeted users sad by consciously manipulating their feeds.
Just because you can blur the dividing line between this and other things with overthinking doesn't mean they are the same things.
posted by saulgoodman at 2:51 PM on June 29, 2014
I wrote up my criticism of this study earlier this week (that someone else linked to already earlier in this thread). It boils down to:
- Use of the wrong tool for the job. The measurement tool, the LIWC 2007, was never designed to accurately measure the emotional tone of such small snippets of informal text. It doesn't stop researchers from using it for this purpose, but it raises significant issues about the validity of their analysis data.
- No actual, independent measurement of mood or emotion was done. The researchers should have done a pilot study first to ensure their assumptions about mood of the users they were investigating were actually based in reality. They didn't do this, and so no actual measurement of mood was conducted of users.
- Effect size so small as to be meaningless. The effect sizes the researchers found were, in my opinion, so tiny as to be negligible. Nobody got sad or happy in this study from their data manipulation. In fact, I'm surprised the journal even accepted this research for publication given the tiny effect sizes. It basically means the researchers found something that was a statistical blip -- but had no real-world meaning.
posted by docjohn at 3:08 PM on June 29, 2014 [3 favorites]
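A worked example of that effect-size point, in Python. All of the numbers are invented for illustration (they are not the paper's figures); they only show that with group sizes in the hundreds of thousands, a difference of a few thousandths of a standard deviation clears the usual p < 0.05 bar:

    # Tiny effect + huge N = "statistically significant". Numbers are made up.
    import math

    n1 = n2 = 345_000                        # hypothetical group sizes
    mean_control, mean_treated = 5.20, 5.23  # made-up % of positive words per post
    sd = 6.0                                 # made-up common standard deviation

    d = (mean_treated - mean_control) / sd        # Cohen's d, about 0.005
    se = sd * math.sqrt(1 / n1 + 1 / n2)          # standard error of the difference
    z = (mean_treated - mean_control) / se        # z statistic
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value

    print(f"Cohen's d = {d:.4f}, z = {z:.2f}, p = {p:.3f}")
    # Roughly d = 0.005 with z a bit over 2, so p lands just under 0.05:
    # "significant", yet far too small an effect for any one user to notice.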
saulgoodman--in all seriousness I don't understand what "false equivocation" and "blur the dividing line between this and other things with overthinking" means in your response. Please explain if you wish/ or don't. I just do not understand what you mean or to what you are referring.
posted by rmhsinc at 3:11 PM on June 29, 2014
A solid observation from The Joy of Tech
from Facebook Blue to Clockwork Orange
posted by oneswellfoop at 3:47 PM on June 29, 2014
If Facebook instituted a policy of making it less likely for the posts of Debbie Downers to reach a broad audience, and Mark Zuckerberg gave a speech defending this policy by saying that he had made this decision because of his great sensitivity to the emotional well-being of the Facebook user base, and that he simply could not stand by while his billion dollar business inflicted so much harm and sadness on billions of people across the globe -- would anyone here consider this some sort of grand humanitarian gesture?
No, I'm against Facebook fiddling with what I see from people and groups and such that I've already decided I want to follow. Deceit in the service of a greater good is still deceit.
I know they totally already do this fiddling for a panoply of reasons. I'm still against it.
posted by Etrigan at 4:37 PM on June 29, 2014 [2 favorites]
So it turns out, according to Forbes, that the study was not actually run through a university IRB at all, but rather an "internal Facebook review." If this is true, it makes me even more dismayed at the academic researchers involved in the study. It is all the more astonishing given that there were apparently federal dollars partially funding this study. To accept federal funding and bypass the IRB altogether is a pretty huge ethics violation, and the academic researchers should be ashamed. I am appalled at the lengths people in my own discipline will go to in order to get a slice of that sexy, sexy big data pie.
posted by DiscourseMarker at 5:06 PM on June 29, 2014 [14 favorites]
> People have grumbled in the past about Facebook messing with people's feeds, but I've never seen anyone call their actions immoral or unethical, just stupid and annoying. So I guess we can avoid a brave new world by banning academic collaboration with Facebook, ...
This is not inconsistent. We regard how acceptable something is as a matter of degrees, and the thresholds we set for what we consider unacceptable are based on the severity of the consequences of exceeding them.
For both Facebook's own messings-about and for academic studies, a breach of ethical standards can cause people to be directly harmed without informed consent. No question there: the most proximal harm is equivalent. But a breach of ethical standards by an academic study can have further negative consequences that Facebook's own messings-about cannot: it can make people mistrustful of scientists and scientific research. That has ramifications beyond the study population: a society that is cynical about science will be less willing to fund scientific research, less willing to volunteer as consenting participants, and less willing to consider scientific findings.
So there's good reason academic research is held to a strict standard. It's not just a point of pride; it's that the consequences for all of us are just too serious. Personally, I'd love to see FB's messings-about being held to the same high standards as academic work, but taking a tack of "but you were ok when FB did it" completely ignores the larger social context of scientific research.
posted by Westringia F. at 5:10 PM on June 29, 2014 [3 favorites]
saulgoodman "They could, you know, just not mess with people's feeds in ways designed to make them sad or otherwise harm them. That might be something."--or make them happy. Most media, advertising etc does exactly this all the time
The crux of the issue is simply the distinction between "help" and "harm". If they had stuck to manipulating the feeds just in order to make people happier, I don't think people would be so up in arms. (Though the ethical issues would be the same, IMO.)
That's the violation, the betrayal --- we don't mind the power that they have so long as we believe it is being used for our benefit. That the company with whom we have entrusted so much of ourselves and our lives might use that info to punish us, just for funsies, that's frightening.
This study, what they did --- I too find myself doubtful that someone seeing 100 more bummer-type words in a given week would be enough to send anyone off the deep end, regardless of what effects they claim to have found in the study.
But if, for the sake of argument, we agree that the actual manipulation here was a mere bagatelle, a flea flick, I don't think that lessens the creepiness factor. Because I think what's really making people's skin crawl is that they know they have us so deep in their pockets they can openly hurt us and we won't run away. For all that people talk about how "companies do this all the time" --- no, they don't, really. They don't just try and make their customers sad and depressed for no reason except to see if they can --- they might try it to induce such a state temporarily, in order to lift us up again by showing us how they can solve/remove our sadness ("In the arms of an Angel...."). But most companies would fear alienating their customers --- they wouldn't risk that. Companies that have their customers by the short and curlies such that they negotiate by threatening to cut their customers off rather than by wooing them --- these are, in general, heavily regulated monopolies, precisely because of that fact. The government has rules about what the power company can do to you if you don't pay your bill, because the power company has great power to harm its customers.
Facebook obeys no rules. Facebook apparently feels so immune from the prospect of its customers fleeing it that somebody came up with the idea of deliberately pissing them off to see how they take it and all Facebook thought was, "ooh, sounds interesting, let's do." It reminds me of Kinsley's rule of gaffes; the real problem in politics is always when you accidentally speak the truth. The problem here is that Facebook accidentally let slip what it thinks of us, and what it thinks is that we're its white rats.
posted by Diablevert at 5:11 PM on June 29, 2014 [8 favorites]
> So it turns out, according to Forbes, that the study was not actually run through a university IRB at all, but rather and "internal Facebook review." .... It is all the more astonishing given that there were apparently federal dollars partially funding this study. To accept federal funding and bypass the IRB altogether is a pretty huge ethics violation....
Wow. IF that's true, it also means that they misled/deceived PNAS editor Susan Fiske, who stated that "the authors indicated their university IRB had approved the study." That would be a whole additional layer of irresponsible research conduct.
(Forbes doesn't name a source for this beyond "a person familiar with the matter." It may be wrong. Or perhaps a university is furiously backpedalling about what its IRB ok'd.)
posted by Westringia F. at 5:20 PM on June 29, 2014 [5 favorites]
(Forbes doesn't name a source for this beyond "a person familiar with the matter." It may be wrong. Or perhaps a university is furiously backpedalling about what its IRB ok'd.)
Kashmir Hill just said on Twitter: "I've reached out to Fiske to ask if there was ambiguity or possible miscommunication. No response yet."
So I guess we'll just have to follow along and see. On the one hand, if a university IRB did approve this study's protocols, I think they did so in error, but I have also seen IRBs take lots of weird and inconsistent positions on Internet-based research. It still doesn't absolve the researchers involved from making problematic ethical choices. But if it is true that they bypassed the IRB altogether, then that just compounds the issue.
This is one of the unfortunate byproducts of the increasing pressure to get grants and the serious publish or perish mentality.
posted by DiscourseMarker at 5:49 PM on June 29, 2014 [1 favorite]
Remember, these are respected researchers. The first comment says so. I'm sure they did everything by the book.
posted by cjorgensen at 7:51 PM on June 29, 2014 [1 favorite]
Can I be the first one to say that I don't believe Fiske when she says she didn't realize that it wasn't fully run by an ethics board? She seems to have a history of attaching her name to media friendly, self-promoting, flawed studies.
I think she thought she'd fly this one under the radar, but got caught with her pants down. Does working for Princeton make you immune from charges of incompetency?
posted by Yowser at 8:18 PM on June 29, 2014 [1 favorite]
So the Forbes article has been updated again. Not sure it actually makes anything clearer:
Yet another update: Professor Susan Fiske, who edited the article in PNAS, says the authors said the data analysis was approved by a Cornell IRB but not the data collection. “Their revision letter said they had Cornell IRB approval as a ‘pre-existing dataset’ presumably from Facebook, who seems to have reviewed it as well in some unspecified way,” writes Fiske by email. Horrible ‘telephone game’ playing here. The Cornell IRB has not yet responded to a media request.
posted by DiscourseMarker at 8:31 PM on June 29, 2014
"Furor Erupts Over Facebook's Experiment on Users" (WSJ)
Post by Adam D. I. Kramer:
The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out.
posted by Estragon at 8:38 PM on June 29, 2014 [1 favorite]
The study Kramer seems to be referring to was released in September 2013, without any cooperation with Facebook. The study Kramer coauthored was conducted in January 2012. Does Kramer have a time machine he's hiding from us?
posted by Yowser at 8:50 PM on June 29, 2014
I should have provided a cite: www.economist.com/news/science-and-technology/21583593-using-social-network-seems-make-people-more-miserable-get-life .
posted by Yowser at 9:00 PM on June 29, 2014
Does anyone actually post Facebook status updates anymore? Does anyone under 35 actually use Facebook? My Facebook Wall is basically a bunch of links that people have shared. It has replaced MetaFilter for me. As a 40+ adult, I generally try not to get too emotionally committed on Facebook, and just post photos of my kids or whatever.
posted by KokuRyu at 9:01 PM on June 29, 2014 [1 favorite]
My Facebook Wall is basically a bunch of links that people have shared. It has replaced MetaFilter for me.
I'd like to know what links your Facebook friends are sharing that are better than Metafilter, because mine is filled with fairly lame puzzles and "which X are you" quizzes.
posted by JHarris at 12:29 AM on June 30, 2014 [2 favorites]
Horrible ‘telephone game’ playing here.
This is precisely why PNAS has a policy that the IRB that approved research should be mentioned in the text. Why it wasn't followed here I have no idea.
posted by grouse at 5:15 AM on June 30, 2014 [4 favorites]
Like most people I know, I keep switching facebook back to "most recent" feeds (primarily links). Now I wonder how much they fiddle with that.
posted by jeather at 5:29 AM on June 30, 2014
"Can we do a study to see why certain people shrug off malevolent behavior on the part of corporations and organizations as "just something that happens"
Who needs a study? The reason why is because the avalanche has already started and it is too late for the pebbles to vote.
There ain't shit I can do personally to stop this kind of thing. I can ignore Facebook on my own for as long as I can get away with that, but I've been told that if I try to change careers, I'll have to start posting on it or else. And that's all I got for options. If the bigger organizations out there won't stop it, if these IRB's everyone is going on about thinks it's okay or whatever (I'm confused now), the avalanche is only gonna get bigger. That's why the pebbles get used to it.
posted by jenfullmoon at 6:25 AM on June 30, 2014
FWIW, I had no problem switching to software development without a FB account.
posted by Estragon at 6:33 AM on June 30, 2014
I'd like to know what links your Facebook friends are sharing that are better than Metafilter, because mine is filled with fairly lame puzzles and "which X are you" quizzes.
For instance, people follow NPR, and share the interesting stories, which overlaps a lot with Metafilter.
posted by smackfu at 6:52 AM on June 30, 2014
So why did Facebook permit this to be published at all? It can only have been at the behest of the academics involved. Facebook are malign but not idiots, what good (for Facebook) did they imagine would result from publishing?
posted by epo at 7:35 AM on June 30, 2014 [1 favorite]
jenfullmoon: institutional review board (IRB).
For folks who are curious about the what/when/why/hows of IRBs:
If you are affiliated with a university, you may be able to take the CITI "IRB Basic" training course for free. This is the course that academic researchers in the US typically must pass before working on a project involving human subjects.
posted by Westringia F. at 7:36 AM on June 30, 2014
> So why did Facebook permit this to be published at all?
Why not? They get collaborators to crunch the data for them, they don't lose anything proprietary in the publishing, and of all the parties involved, they're the least likely to get burned by the blowback. They're not held to the same high standards as academic researchers & publishers, and they have an effective monopoly on social networking. Besides, everyone already thinks they're evil. What the hell do they care?
Peer review -- by the IRB, by the PNAS referees, by the PNAS editorial staff, by people the authors talked with at conferences & over coffee, &c -- should have nipped this problem in the bud. Every one of those checks failed. They failed the study participants, and they failed the broader scientific & FB user communities. But they also failed the authors. The person most likely to suffer the consequences of this debacle -- the 2nd author, who is most junior and thus has no other reputation to fall back on -- is likely also the one who had the least power to control the study. One can argue that she could & should have stopped it, but so too could all those other mechanisms, as was their job. And because all those checks failed, this will dog her in google searches the rest of her career; it'll show up when she applies for jobs, when universities solicit letters for tenure files, &c.
But the consequences for Facebook? Nil. Not a single investor has pulled out over this; not a single advertiser has boycotted the platform in outrage. And my guess is that the number of user eyeballs falling on those ads hasn't appreciably changed as a result of this, either.
posted by Westringia F. at 7:59 AM on June 30, 2014 [1 favorite]
I'd like to know what links your Facebook friends are sharing that are better than Metafilter, because mine is filled with fairly lame puzzles and "which X are you" quizzes.
I think it helps that I have about 400+ connections (I wouldn't call all of them "friends") on Facebook. Facebook has been great for connecting with people with similar interests and experiences, notably people crossing cultures between Japan and Canada... Actually, most of the people I interact with on Facebook are either Japanese folks or Americans with a deep connection to Japan. So it's easy to immediately discuss an issue, whether it be related to politics or children.
On Facebook, I'm friends with writers, journalists, translators, and other people with shared interests. I also follow certain niche publications.
So my feed is actually pretty relevant to my interests. The weird thing is I don't actually like to share many links, just photos. It's mainly because Facebook for me is this complex Venn diagram of relationships (family vs people my family does not know) so comments on my Wall posts are really weird to read when there is overlap.
The stuff that pops up is immediately interesting, and I don't have to worry, like I would on MetaFilter, about including a preface about a complicated subject (eg, "collective self defense"), people just get it.
But only a few of my connections post actual "status updates." It's such an anachronistic way to look at Facebook.
posted by KokuRyu at 8:00 AM on June 30, 2014
Westringia: fair enough. FB is too big to be harmed by this, but what was in it for them? Why reveal anything about their research? They can't have imagined favourable coverage would have resulted and this is bad publicity. No one, however big, wants that.
posted by epo at 8:14 AM on June 30, 2014
what was in it for them? Why reveal anything about their research?
I know a couple of folks associated with the Facebook Data Team. Their motivation is that they are scientists, and want to do research and share it with the world. And they do the work at Facebook because it pays well and because Facebook has a uniquely amazing set of data. And Facebook has enough people with academic backgrounds to realize, as a company, that contributing to the academic community is a good thing. So they let a few folks do work like this and publish it. That team has done a lot of amazing work over the years. There should be a list of publications here but it's broken at the moment. Here's a related list of Facebook publications.
An interesting contrast is Apple, who notoriously locks its employees inside and never lets them talk about their work with any of their colleagues. To the point that Apple employees show up to conferences wearing anonymous badges, and have nervous laughs and polite deflections at the ready when politely asked "hey, what do you work on?". The Apple folks I know all hate that enforced secrecy, but choose to stay because the work is interesting or the pay is good or they believe in the company. Facebook's trying to be a bit more open and sharing than that, and that's a good thing.
So the problem with this blowback about their ethics is that Facebook, the company, is going to look at this and say "there's too much risk in publishing stuff". And they'll keep doing the work but just keep their valuable research methods and results internally. That will be a net loss. And that's a shame. I stick by my assertion the possible harm to subjects in this experiment was small, and that an IRB should have approved a study like this. I'm still mad they did it without appropriate consideration and disclosure, but as academic research sins go it's pretty minor IMHO.
posted by Nelson at 8:37 AM on June 30, 2014 [2 favorites]
I agree that the blowback to FB is going to curtail future publications. But I also maintain that what FB did is business as usual on the web; they just made it public, which most companies don't do. The A/B method is specifically intended to manipulate behavior, and whether it has been used to manipulate emotional behavior before is unknown. The outrage about FB should be more general concern about advertising manipulation on a grand scale - unless one realizes that the use of free social networks means that these networks stay free because of advertising, and the advertising is specifically geared to modify your behavior. When you sign up, you agree to their terms. This - the terms of the signup - is an issue for an agency like the new Consumer Protection agency. The terms should be in plain language that anyone can understand in minimal time.
posted by bluesky43 at 8:54 AM on June 30, 2014 [1 favorite]
I'm not convinced this'll curtail future publications. FB's not having to defend itself half as much as PNAS is; I wouldn't be surprised if the FB PR folks had a perfectly lovely weekend. And despite the bad publicity from this, there's still a lot to be gained by partnering with academia & publishing generally:
First, they get research results that may help their business (eg) without paying the salaries of their academic collaborators. No academic would collaborate if s/he couldn't publish. Publishing -- and the risk of some occasional blowback -- is a damn good deal for getting bright people to work with you for free.
Second, they want to employ smart, curious people, and smart, curious people like to work interesting research questions and gain recognition for their work, ie, publish. Consider what PARC or Bell Labs did in their heyday; both allowed their employees to publish & interact with the scientific community.
Third, external researchers are going to mine FB data in whatever way they can -- it's too rich a dataset to be ignored! FB can either try to shut them out, or work with them and have some input & control (not to mention money, by charging for data access).
Fourth, despite the negative PR from this paper, they also get some positive PR from partnering with academia: "we share openly", "we're helping to advance knowledge", "we got this cool result", &c.
I'm not convinced the risks of publishing are bad enough for them to sacrifice those things. It's only one publication out of many so far, and the consequences to them have been minor. So my guess is that they'll keep publishing, but they'll find some way to ensure that the blowback falls as much on their academic collaborators as possible. They're already most of the way there.
posted by Westringia F. at 9:20 AM on June 30, 2014 [1 favorite]
So what is to be done? Facebook already filters interpersonal communication, there's nothing "natural" about a News Feed or Timeline. I'm just stating a fact. I guess you can scold me for stating this fact...
Yeah, no. You aren't just stating a fact, you are arguing that the fact that Facebook uses deception to manipulate its users is not an example of unethical behavior, or is somehow an immaterial example of unethical behavior, and therefore academic and scientific professionals and institutions collaborating in manipulating social network application users in the interest of acquiring value from them has no ethical dimension and no material impact on society.
As I've said above, I've kept away from Facebook with a 100-foot pole and never had an account and hence haven't participated in much discussion about it, but if no one has ever said that Facebook manipulating and trying to control the behavior of its users is unethical I'll eat my shorts.
If in a business relationship between two one-percenters one party behaved towards the other the way that Facebook behaves towards one of its users, there's no question in my mind that the manipulating party would be treated as having engaged in not merely unethical behavior but even criminal behavior: it's only because the value they're extracting from average members of the public is seen as something that's supposed to be free to a company that wants to profit from it, as opposed to something that would be construed as having substantial quantifiable value by a one-percenter's lawyers and accountants, that they can get away with wresting it from people in an underhanded fashion.
So I guess we can avoid a brave new world by banning academic collaboration with Facebook, and then people will go back to grumbling on Facebook about how much they hate Facebook. Problem solved.
No, the underlying problem is people that pretend this sort of stuff is pedestrian and unremarkable and demands no action, just like all of the people who "totally knew" the NSA was gathering and integrating diverse sources of corporate surveillance, unconstrained by even the paltry limits we place on what corporations can do with surveillance output, even though top-flight academic researchers in intelligence fields say they didn't know that level of NSA activity was happening before Snowden's revelations.
As long as enough of us are willing to pull a "see no evil/hear no evil/speak no evil" act in exchange for a sum of 60 guilders in beads and trinkets, the utility of the Facebook application or some crumbs of research data, we're on track for the world you're pretending already exists now and/or are trying to imply is a distant dystopian figment of the imagination—where there's unbridled alteration and filtering of the information coming out of every communication and data source—to become the actual real world pretty shortly.
What's happening here is that the value companies are expected to be allowed to extract from the average person for free is being pushed up to greater and greater and greater levels, to the point where the pound of flesh they take can include all of the positive parts of your friends' and relatives' lives, and because academics and scientific researchers are allowed to skim some of that value off they're keeping a straight face while saying "Wha... of course you're allowed to take that from random people for free without even so much as informed consent!"
posted by XMLicious at 9:42 AM on June 30, 2014 [6 favorites]
My op-ed (written with Art Caplan) on the subject is now available.
posted by cgs06 at 10:38 AM on June 30, 2014 [3 favorites]
Standard reminder: if something is free to you, you are not the customer. You are the product.
Holy shit, can we like rig up a boot on a robot arm to everyone's computer that just kicks them in the shin whenever they post this? It's not clever, or insightful, or informative anymore. Everyone's heard it; it was witty the first 5 times. Now it just sounds like smug internet militant atheist kind of ego stroking garbage.
I realize this might just be some weird pet peeve of mine that i find it so tiresome, but it shows up in EVERY thread and gets favorites over and over and over. Is it a meme? Did i miss something?
posted by emptythought at 12:58 PM on June 30, 2014 [6 favorites]
Is it a meme?
It's stated on one of the official MetaFilter T shirts, so it is, at least a little. I do like your robot shin-kicker idea, though.
posted by grouse at 1:25 PM on June 30, 2014
I'm a Facebook user with a diagnosed mood disorder. Can't wait to sign up for the class-action lawsuit(s), because this is seriously 1000% crap behavior on the part of these so-called scientists. Especially because it took me about nine minutes to sketch out a study design that would have answered this question without directly mucking around with people's brains.
I also expect my health insurance company, which spent quite a bit of money on drugs and therapy for me that month, would like its pound of flesh. Come to think of it, I wonder how many Facebook users are on Medicaid/Medicare/Tricare - the feds and the states combined spent far more on mental health services in a week than everyone else in America put together, and even a tenth of a percent of that money would be a considerable sum. I say they get it all back. With interest.
I'm not nearly as good as the staff at the US Department of Justice at coming up with creative ways of deciding that they can go after an absurd amount of money. My guess is US government agencies (including states and local governments) spent at least $2 billion on mental health in a given week, so a tenth of a percent of that is $2,000,000 to start with (per week); assuming Facebook can't prove that these several hundred thousand users were or weren't using US public health services, it's reasonable to assume that at least a quarter of them were, given the number of people who are enrolled in Medicare/Medicaid/etc. and the likelihood of any one English-language Facebook poster being in the US, and that about half of those people used some kind of mental health services during the course of, say, the next six months... My estimate is $6,500,000 that would be reasonably easy to argue for. And when you piss off the feds, they often pull out "triple damages" compensation type stuff on top of their principle-of-the-thing penalties....
posted by Fee Phi Faux Phumb I Smell t'Socks o' a Puppetman! at 3:18 PM on June 30, 2014
So why did Facebook permit this to be published at all? It can only have been at the behest of the academics involved. Facebook are malign but not idiots, what good (for Facebook) did they imagine would result from publishing?
I think you've asked the key question epo, and I certainly do not know the answer.
But I know what I'm afraid the answer is.
The 2014 midterms are upon us, and Republicans are very well aware that they are at a decisive demographic disadvantage (Cantor's primary defeat has surely convinced any doubters that the slightest gesture in the direction of reaching out to Hispanic voters can be political suicide), and all that's left to them are various forms of voter suppression -- voter ID laws, dodgy polling location changes, eliminating early voting, Florida 2000 style purges of the voter rolls, direct voter intimidation at the polls, understaffed polling stations in heavily Democratic districts, problems with forms and voting machines, and etc.
But there's also the fact that Democratic voters are less likely to vote in the first place, and less likely to vote if they are discouraged about the political process, and discouraged about things in general-- and I would say Facebook could easily tell which of its account holders are more likely to vote Democratic if they do vote.
What is there that would stop rich Republican activists from paying Facebook to load the feeds of identified likely Democratic voters in swing states and contested districts with negative and discouraging terms and items? Would that be illegal somehow?
I have no idea, but from that point of view, this study looks like proof of concept.
posted by jamjam at 4:06 PM on June 30, 2014 [1 favorite]
Additionally, holy shit do i think it's hilarious anyone thinks that facebook should, or would even have to pay money over this.
How hard would it be for them to argue these tests simply coincided with their sectional/structured feature testing methods, and that these results were simply drawn from that? And that what shows up on your timeline is governed by trade-secret proprietary algorithms? Except additional fielding of the "you couldn't show all 600 of your friends' posts in a meaningful way without it taking longer to read than the speed at which they're posted and never catching up" argument, which has been brought up even on the green and the blue before with easy show-your-work numbers.
The way it usually picks what to show you is a total black box that seems to be based on things up to and as creepy as location data that got in close proximity with that person. This is that black box + looking for some intrinsic qualities... with some sort of black box, proprietary algorithm.
So they tested algorithms that sorted the posts differently and showed some people only negative posts, and some people only positive posts.
I think as far as some sort of research goes this should be thrown out and/or laughed out, and was not conducted in an ethical manner. I think people are selling it a lot larger than it is when they're calling it some kind of evil emotional manipulation/damages that they should be fined for.
Conducting this research is bad. But they've been manipulating what you see and why for years. If they're going to start getting drilled for that, then where is the line on what is and isn't ok? It's not just as simple as "well don't do research projects like this" since very complicated manipulation of users feeds has been going on for at least 4 years, if not since way earlier.
posted by emptythought at 4:40 PM on June 30, 2014
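For readers trying to picture mechanically what "showed some people only negative posts" amounts to, here is a minimal, purely illustrative sketch of sentiment-based feed filtering. Everything in it (the toy word lists, the filtered_feed function, the omit probability) is an assumption for illustration; the published study relied on LIWC-style word counts, and Facebook's actual ranking code is not public in this form.

```python
import random

# Toy word lists standing in for a real sentiment lexicon (illustrative only;
# the study's LIWC-style lexicons are far larger).
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def contains_any(text, vocabulary):
    """True if any word from `vocabulary` appears in the post text."""
    return any(word in vocabulary for word in text.lower().split())

def filtered_feed(posts, condition, omit_probability, rng=random):
    """Return the subset of `posts` one user would see.

    condition: "reduce_positive" or "reduce_negative" (the two arms).
    omit_probability: chance that a matching post is dropped on this feed
    load; a stand-in for the per-user omission rates described in press
    accounts of the experiment.
    """
    target = POSITIVE_WORDS if condition == "reduce_positive" else NEGATIVE_WORDS
    shown = []
    for post in posts:
        if contains_any(post, target) and rng.random() < omit_probability:
            continue  # silently drop the emotional post for this user
        shown.append(post)
    return shown

# Example: a user in the "reduce_positive" arm sees fewer upbeat posts.
feed = ["Feeling great about the new job!",
        "Ugh, awful commute today.",
        "Lunch was fine."]
print(filtered_feed(feed, "reduce_positive", omit_probability=0.9))
```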
And to be clear, i realize that most of what i've said has already been said in here. And that most people aren't on board since they go "well that stuff was wrong too".
So why is this so different? Is it just that it's the perfect little pill of outragebait? This is nothing new; it's like AT&T or Comcast doing some new shitty thing. They've been doing shitty things for years. Is this the straw that broke the camel's back?
posted by emptythought at 4:42 PM on June 30, 2014
It's like you missed all the comments and links about why a psych test done under the auspices of an IRB and academic institution, for publication in a peer-reviewed scientific journal, is and should be seen differently from a commercial entity's A/B tests.
At least read iamkimiam's comment.
posted by rtha at 5:05 PM on June 30, 2014
I think people are selling it a lot larger than it is when they're calling it some kind of evil emotional manipulation/damages that they should be fined for.
So why is this so different? Is it just that it's the perfect little pill of outragebait?
I think that depends on where you work - Industry, or the Academy. In Industry it might well be okay, in the Academy it is not. This is complicated by the fact that some Academy folks lust after Industry's 'Big Data,' while some Industry folks like to think they are also serious Academics.
Here it's the interaction between the two that is problematic. It's probably short-term localized PR damage for FB, possibly longer term damage for Cornell.
The issue as I see it lies in the data hand-off between FB and Cornell. Basically Cornell is saying that "We did not collect the data, so we did not have to monitor how it was collected."
This is actually quite a common practice with respect to IRB protocols and data. However it is usually applied to data that requires de-identification and anonymization. That is, Industry Group A collects data that is not anonymous (e.g. cell phone account data) and would in an academic setting require informed consent. As they are Industry, their 'informed consent' is inadequate from the point of view of the Academy. However an Industry-Academy partnership can deal with this by having Industry publish an anonymized data set that Academics can then work on. It's a form of hand-washing and plausible deniability that is a kind of 'workaround' for this type of data analysis.
This example is somewhat different. The data is not just being anonymized, but has also been collected in a way that Cornell IRB maybe would have defined as unethical - that is, manipulating subjects' emotional states without telling the research subjects (a) that they have been recruited, (b) that they have been manipulated, (c) what the research is about, (d) with any option for subjects to opt out of the research if they feel uncomfortable, etc.
So among the questions here are, did Cornell IRB approve a study that they knew was based on data that was collected in an unsatisfactory fashion? And further, if they did know that, did Cornell IRB think that the typical 'hand washing' tactic was enough to distance them from the original (perceived as unethical) data collection?
If the consensus among IRB audit folks tilts the wrong way for Cornell, then basically, this has the potential to shut down ALL federally funded research at Cornell. I'm assuming that their IRB process is audited by CITI, and that CITI know about this. And I'm sure that there are some worried people at Cornell at this moment.
And yeah corporations do this all the time. However they don't normally publish their research with academics, in academic journals.
posted by carter at 5:11 PM on June 30, 2014 [1 favorite]
Well. So it wasn't even part of their no-one-reads-it-anyway ToS at the time, and there was no age filter (so under-18s were included, probably?), and the path through the IRB process is less than clear?
posted by rtha at 9:13 AM on July 1, 2014 [5 favorites]
I feel bad for the poor postdoc that is going to get fucked over by other people's lack of ethics.
posted by Elementary Penguin at 9:35 AM on July 1, 2014
Everyone is going to be distancing themselves from this. I'm guessing some careers are done.
posted by cjorgensen at 12:06 PM on July 1, 2014
Arrgh, it frustrates me so much that even that latest Forbes Kashmir Hill link, when comparing this to A/B testing, doesn't seem to grok that applying any related technique to interpersonal communication is quite a step further than applying it to a marketing message or even a news story. jamjam's scenario, where you replace "emotionally positive" and "emotionally negative" with "politically left-leaning" and "politically right-leaning" and the objective is to alter your political inclinations by fiddling with your perception of friends' and relatives' political inclinations, is a really succinct example of where this leads.
posted by XMLicious at 12:10 PM on July 1, 2014
Ok, previously i was like "why is everyone crapping on facebook when they aren't even the researchers?", but now they deserve to get sued.
I hope they actually do. I hope this goes to the supreme court and sets up some sort of win for privacy rights and this sort of crap.
Or, who am i kidding, they'll win because they had the "we can change this at any time" language in there; they put it in, and somewhere i bet it says it's retroactively effective.
The best thing that could come out of this is some kind of serious blow against clickthrough ToS' and license agreements honestly. I'll be shocked if facebook doesn't essentially get away with this, though.
posted by emptythought at 12:32 PM on July 1, 2014
I asked earlier if a researcher had a time machine. Looks like Facebook has one too. When are we going to stop letting Silicon Valley break laws for fun and profit? Facebook, Amazon (taxes), Uber (taxi laws), and thousands of sleazy companies should face more than slaps on the wrist.
If they're truly people, then they should be forced to cease operations when put in corporate jail, just like all other people.
posted by Yowser at 12:50 PM on July 1, 2014 [1 favorite]
I would say that the problem goes much deeper than whether or not laws against things like this exist. The basic idea in our system is that injustices are redressed by injured parties bringing suit in court, and only at that point is there any formal evaluation of whether given actions are legal or illegal. What's sort of the opposite approach - the government or another watchful central authority or regulatory body observing activities within society and proactively preventing or punishing illegal ones - has, during the Cold War and the rest of the 20th century, been successfully branded as a nebulous notion of "Communism" and as something that should be feared and opposed on first principles.
So things like this - fraud, lawlessness, and injustice against average people by much more wealthy and powerful people and entities on a scale small enough that it can't feasibly be litigated against by most individuals - our system is essentially set up to permit and even encourage. The continuous undermining of unions, class action lawsuits, and other forms of collective action just exacerbates that state of affairs.
Marginal advances like Supreme Court wins and stronger privacy rights under the existing system, if those wealthy and powerful actors in society are allowed to continue to accumulate and consolidate massive asymmetric advantages over individual members of the public, don't seem like enough to me to derail or even balance out the worst trends we face. We need to demand a new paradigm, or at the very least bite the bullet and kick to the curb and replace the bits of our civilization that are dilapidated and reeling out of control rather than patching them up and muddling on, and not dump all of our energy and effort into delivering electoral and political gains to one out of two party/tribe/factions who swap identities every century or every few decades anyways.
posted by XMLicious at 4:19 PM on July 1, 2014 [2 favorites]
If they're truly people, then they should be forced to cease operations when put in corporate jail, just like all other people.
Or put them to death when they commit acts of terrorism and murder? If BP were a person do you think he/she would still be alive after the gulf "accident" or do you think he would be on death row for negligent homicide and acts of terror?
posted by cjorgensen at 4:27 PM on July 1, 2014
And as it turns out, Adam Kramer was coordinator of a so-called Human Research pool at U of Oregon. http://t.co/byMRdrmEbG
(Search for Adam Kramer.) And psychweb.uoregon.edu/undergraduates/humansubs .
A 3% increase in course grades is basically mandatory for anyone with the least bit of ambition. So Adam Kramer has gone from doing basically mandatory research to completely secret and unconsenting research. Let me do everyone a favour and Godwin this thread; it somehow seems appropriate for Mr. Kramer.
I wonder who he and Ms. Fiske will trick into taking the fall.
posted by Yowser at 5:33 PM on July 1, 2014
And in the spirit of godwining, I said something stupid. Apparently Human Subject Pools for extra credit are a Thing at many Universities.
A deeply, deeply stupid, problematic thing, but a thing nonetheless.
posted by Yowser at 5:40 PM on July 1, 2014
Unethical researcher gives hand-waving response, says, "No big whoop."
posted by cjorgensen at 8:20 AM on July 2, 2014 [1 favorite]
danah boyd weighs in. I like how she offers structural solutions; I wish they were really feasible.
(I attended a talk recently that was like "big data person explains big data to information professionals" and he devoted one slide to privacy and related ethical concerns, including the priceless bullet point "general creepiness." No empathy, just a moment of lip service. I learned later that this person has many strategies in his personal life for opting out of big data and thinks other people should too. How frustrating on several different levels.)
posted by clavicle at 8:37 AM on July 2, 2014
The British data watchdog is investigating whether Facebook Inc violated data-protection laws when it allowed researchers to conduct a psychological experiment on its users.
"We're aware of this issue and will be speaking to Facebook, as well as liaising with the Irish data protection authority, to learn more about the circumstances," the Information Commissioner's Office (ICO) spokesman Greg Jones said in an email.
Jones said it was too early to tell exactly what part of the law Facebook may have infringed. The company's European headquarters is in Ireland.
The Commissioner's Office monitors how personal data is used and has the power to force organizations to change their policies and can levy fines of up to 500,000 pounds ($839,500).
Facebook said it would work with regulators and was changing the way it handled such cases.
"It's clear that people were upset by this study and we take responsibility for it," Facebook spokesman Matt Steinfeld said in an email.
"The study was done with appropriate protections for people's information and we are happy to answer any questions regulators may have."
The Financial Times first reported the probe.
posted by infini at 3:13 PM on July 2, 2014 [3 favorites]
"We're aware of this issue and will be speaking to Facebook, as well as liaising with the Irish data protection authority, to learn more about the circumstances," the Information Commissioner's Office (ICO) spokesman Greg Jones said in an email.
Jones said it was too early to tell exactly what part of the law Facebook may have infringed. The company's European headquarters is in Ireland.
The Commissioner's Office monitors how personal data is used and has the power to force organizations to change their policies and can levy fines of up to 500,000 pounds ($839,500).
Facebook said it would work with regulators and was changing the way it handled such cases.
"It's clear that people were upset by this study and we take responsibility for it," Facebook spokesman Matt Steinfeld said in an email.
"The study was done with appropriate protections for people's information and we are happy to answer any questions regulators may have."
The Financial Times first reported the probe.
posted by infini at 3:13 PM on July 2, 2014 [3 favorites]
Yes, I saw this. The quote from the spokesman, "The study was done with appropriate protections for people's information and we are happy to answer any questions regulators may have," completely misses the point. I'm not sure if they are being stupid, or being devious.
posted by carter at 4:39 PM on July 2, 2014
Facebook's Sandberg: We 'really regret' our secret-test misfire. I guess this issue has spiraled up from academic tut-tutting to a full-blown PR problem the COO has to answer for. Weirdly, Sandberg's comments are to a New Delhi TV station (she happens to be there), and her main talking point is "what really matters here is that we take people's privacy incredibly seriously." I wonder if she's been briefed sufficiently to understand privacy isn't even the issue this time?
Most of my social media researcher friends don't think this is a big deal. danah boyd's editorial is good reading, and her criticisms of the value of the IRB process match what most of the folks I know in the field think.
posted by Nelson at 5:00 PM on July 2, 2014
HuffPo weighs in.
CNN weighs in (sort of).
CNN again.
posted by cjorgensen at 5:28 PM on July 2, 2014
Here's a quote I can't quite parse:
Dr. Nita Farahany, director of Duke University's master's program in bioethics and science policy, points out that Facebook's experiment may not qualify as human subject research. "Facebook’s 'research' falls into a relatively new and under-theorized category of threats to individual autonomy," Farahany told Ars. "Advertisers attempt to manipulate human emotions through advertisements, A/B testing, subtle changes in wording all the time. And they attempt to measure the effects of those differences. Is this different?"
Why would experimentation that involves threatening individual autonomy not count as human subject research?
posted by XMLicious at 8:43 PM on July 2, 2014
I think I can parse it, but I don't agree with it, nor do I think he really thought it through.
Devil's advocate:
Sites do A/B testing all the time. Hell, I do this. I build two landing pages that are nearly identical. I have my CMS serve up the A page half the time and the B page half the time. The only difference between the pages is ad placement. So did page A or B get more click-throughs? Did page A or B have a lower bounce rate? Which page encouraged people to stay on the site? Etc.
I made the choice of what you see. I looked at the data and decided which page you would see after my testing is done. There was no, "Which page do you want to see?" I took away your ability to be autonomous. This is like what Facebook does with their "Top Stories." That algorithm is probably tweaked all the time. Do people stay on the site longer when they get news in chronological order or when we give them things we think they will find interesting? A/B testing.
So, do people like the site more when they read negative news or when they see happy news? The only way to know is to test.
Advertising does this. They run one commercial in one market and another in another and see how each did, then they take the successful one—the one that makes you happy and think positive things about their brand and buy their product—nationally.
The Obama campaign did this to huge effect with its "big data" email campaigns. They sent the exact same email to many focused groups of people with different subject lines, then monitored donations and responses from those emails. The one with the best result got sent out nationally. If I remember correctly, the best subject line was simply, "hey."
Please don't make me defend the above. It's not how I feel on this issue at all, but I think that's a fairly accurate representation of the argument being made. (At least it's an honest attempt.)
posted by cjorgensen at 7:42 AM on July 3, 2014
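(For concreteness, the plain A/B split described in the comment above can be sketched in a few lines of Python. This is a hypothetical illustration only; the visitor IDs, click rates, and function names are invented, not taken from any real CMS. Everything it records is a behavioral count, with no attempt to measure or induce how a visitor feels.)
```python
import hashlib
import random
from collections import defaultdict

# Minimal, hypothetical A/B test: bucket visitors into two page variants,
# count views and click-throughs, then compare the rates. All numbers and
# identifiers below are made up for illustration.
counts = defaultdict(lambda: {"views": 0, "clicks": 0})

def assign_variant(visitor_id: str) -> str:
    # Stable 50/50 split: the same visitor always lands in the same bucket.
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def record_view(visitor_id: str) -> str:
    variant = assign_variant(visitor_id)
    counts[variant]["views"] += 1
    return variant

def record_click(variant: str) -> None:
    counts[variant]["clicks"] += 1

def click_through_rates() -> dict:
    return {v: c["clicks"] / c["views"] for v, c in counts.items() if c["views"]}

# Simulated traffic: pretend variant B's ad placement draws slightly more clicks.
random.seed(1)
for i in range(10_000):
    variant = record_view(f"visitor-{i}")
    if random.random() < (0.030 if variant == "A" else 0.034):
        record_click(variant)

print(click_through_rates())  # roughly {'A': 0.03, 'B': 0.034}
```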
I think the difference is that with basic A/B testing, you (general you) want to see which thing gets more responses - do more people click the blue one or the green one? With what these folks did, they wanted to see quite specifically what kind of response it would get - and, further, whether they could manipulate that response beyond "clicked the blue one more" to "more people got sad when they clicked the blue one."
Another difference is that the researchers wanted it to be human subject research in a way that would get it taken seriously as academic research; if they hadn't, they wouldn't have submitted it to a peer-reviewed journal, not to mention the IRB process.
I don't think you can have it both ways (yet, anyway) - it can be marketing research, in which case you don't get to submit it for peer review, but you also don't have to do the IRB dance, or it can be Serious Academic Research, in which case there are pretty strict protocols you are supposed to follow, like getting informed consent from your subjects.
posted by rtha at 8:16 AM on July 3, 2014 [2 favorites]
This isn't A/B testing of your perception of a company's product or of their marketing message, this is A/B testing of your perception of your friends and family and your friends and family's lives, A/B testing of reality.
The absurd thing about this whole thing is that the "reality" it's all based on is usually just a cherry-picked and curated universe where people live artificially perfect lives, perpetuating "Facebook envy."
If they want to manipulate someone into feeling sad, I honestly have no idea if that means they show them the news that Uncle Torgo died or the news that their BFF just bought a new sports car to celebrate a huge promotion.
posted by Room 641-A at 8:54 AM on July 3, 2014
Everyone: if you want to participate voluntarily in social network experiments done RIGHT, I urge you to check out volunteerscience.com. You can find descriptions of their study questions -- and the IRB -- here. Sign up; tell your friends.
David Lazer, who is one of the leaders of the VolunteerScience project and an extremely well-respected social networks researcher, wrote this modest proposal to Facebook, suggesting that such companies "create opt-in experimental panels, with an initial clear, short, transparent and in-your-face, fairly general and flexible consent about the types of ways their sociotechnical environment would be (modestly) experimentally varied" -- just as they do for VolunteerScience.
posted by Westringia F. at 9:11 AM on July 3, 2014
cjorgensen, I think I get all that (and I'm a software engineer specializing in CMS, btw), but what I don't understand is why even straight, standard A/B testing on a single centralized message (as opposed to en masse manipulation of interpersonal communication) would count as some kind of experimentation other than experimentation involving human subjects, which is purportedly the view of a bioethicist according to that op-ed writer.
posted by XMLicious at 10:02 AM on July 3, 2014
PNAS has now published an Editorial Expression of Concern. That is what academic journals do as an early response to some matter that may eventually lead to retraction. Although possible retraction or further investigation does not seem envisioned here.
posted by grouse at 1:40 PM on July 3, 2014 [3 favorites]
I just still can't grok why they even went with A/B testing in the first place. Doing an experiment within Facebook to observe behavior is like building a zoo in the middle of the Serengeti plains. When you're God.
posted by iamkimiam at 1:40 PM on July 3, 2014 [3 favorites]
Coke and Pepsi spend millions to fight over 2% of market share. There was also a time when Friendster was God, before MySpace became God. When you are a God, it does pay to figure out how to keep your worshipers happy, or you are the new Digg.
posted by cjorgensen at 5:08 PM on July 3, 2014 [1 favorite]
It is hard to get the message across on Coursera that emotions are important in teaching, but that expressing those emotions can lead to data collection.
posted by b9492e7f929dab23426aa2b344d3d5bef083f7e1 at 6:45 PM on July 3, 2014 [1 favorite]
I guess, "It's free and always will be," was focus-grouped away from the original, "You are the product being sold and always will be."
posted by ricochet biscuit at 9:56 AM on July 4, 2014
Facebook's Sandberg: We 'really regret' our secret-test misfire.
"we really regret getting caught"
posted by elizardbits at 12:22 PM on July 4, 2014 [11 favorites]
"we really regret getting caught"
posted by elizardbits at 12:22 PM on July 4, 2014 [11 favorites]
Facebook denies emotion contagion study had government and military ties
If Facebook can tweak our emotions and make us vote, what else can it do?
posted by XMLicious at 11:56 PM on July 4, 2014
Sites do A/B testing all the time
I know you're devil's advocating here, but anyway, A/B testing does not involve inducing and then analyzing emotional states, but (as you say) looking at web metrics (clickthroughs etc.) to see which design tweaks 'work' better. A lot of the underlying theory here is basic cognitive psychology; do this X, and the predicted cognitive response (in the web metrics) is Y.
Advertising does this.
Obama etc.
These activities pretty much focus on alternate ways to make you feel positive about a product (including Obama), and not on making you miserable half the time.
IRB rules state that studies with the potential for negative outcomes (e.g. negative emotional states in subjects) have to be tightly controlled.
That PNAS statement puts 'privacy' issues upfront; it looks like maybe they don't get it either.
posted by carter at 3:42 PM on July 7, 2014 [1 favorite]
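(To make that distinction concrete: the paper describes tagging friends' posts as positive or negative with automated word counting (LIWC) and then withholding some of them from the News Feed. The sketch below is a deliberately simplified, hypothetical version of that kind of pipeline; the word lists are invented, and the real study omitted posts probabilistically rather than filtering a whole class outright. The point is that the measured quantity is the emotional valence of what people see and subsequently write, not which link they clicked.)
```python
# Hypothetical, simplified sketch of a valence-tagging feed filter.
# The actual study used LIWC word counts and probabilistic omission;
# these word lists and this all-or-nothing filter are made up.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "terrible", "hate", "awful", "lonely"}

def valence(post: str) -> str:
    # Crude word-list tagging: compare counts of positive vs. negative words.
    words = set(post.lower().split())
    pos = len(words & POSITIVE_WORDS)
    neg = len(words & NEGATIVE_WORDS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def filter_feed(posts: list[str], suppress: str) -> list[str]:
    # The manipulated condition: hide one emotional class of friends' posts.
    return [p for p in posts if valence(p) != suppress]

feed = [
    "So happy about the new job, love this team",
    "Feeling sad and lonely tonight",
    "Grabbed coffee with an old friend",
]
print(filter_feed(feed, suppress="negative"))
# The sad post is gone; the reader never knows it existed.
```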
Facebook Has Always Manipulated Your Emotions
posted by the man of twists and turns at 8:19 PM on July 8, 2014
That article says that the reason why this is an unremarkable case of Facebook manipulating its users is that it offers a "Like" button and emoticons for expressing oneself, but no "Dislike" button, and that asymmetry of options has a greater emotional effect (according to the author, a sociology professor) than this particular instance of filtering the content coming from other people.
Which is an even poorer way of dismissing the issue than bringing up A/B testing, IMO. There's no "Dislike" button nor any "Like" button either for expressing yourself about something you never get to see in the first place.
posted by XMLicious at 10:08 PM on July 8, 2014
Seeing More Politics in Your News Feed? Facebook Boosts Partisan Sites. (Not related to the study.)
posted by Nelson at 11:08 AM on July 14, 2014 [1 favorite]