Women in STEM fields
April 14, 2015 10:47 AM
An empirical study by Wendy Williams and Stephen Ceci at Cornell University found that, given identical qualifications with only the applicant's sex changed, "women candidates are favored 2 to 1 over men for tenure-track positions in the science, technology, engineering and math fields."
The researchers note, however, that "some traditional values emerged ... when we looked at the effects of lifestyles on hiring" (CNN).
Interesting discussion over at Reddit on this (I know). They weren't impressed. Commenters thought the conclusions were overly broad, and didn't take into account that the results came from a study setting, which might throw people's reactions off.
posted by zabuni at 10:54 AM on April 14, 2015
That would be great advice if there were still a lot of tenure-track jobs!
posted by ChuraChura at 10:55 AM on April 14, 2015 [27 favorites]
Interesting. It's probably not all that surprising when you consider that university departments are increasingly getting real administrative heat put on them if they consistently look like demographic outliers in their hiring. I imagine most STEM field departments spend a lot of time worrying about how they can attract more female hires.
I wonder, though, if we'd see different effects when it comes to the actual point of making decisions between real candidates rather than just looking at hypothetical cases? That is, it's easy to say "oh, yeah, we really need to hire more women; this hypothetical woman looks great! That's the kind of person we should hire!" But when it comes down to a real-world choice between female candidate A and male candidate B, I wonder if some of the unconscious biases in favor of the male candidate reassert themselves? Unfortunately, there's no practical way of running dummy applications through the faculty hiring process to test that out.
posted by yoink at 10:57 AM on April 14, 2015 [19 favorites]
The IHE article actually goes into where the study falls short. For instance, it doesn't seem to be testing actual hiring (a decision that may not be limited to the groups the researchers contacted), nor does it deal with gender bias in the STEM fields for those already in those jobs. And really, their assertion that "it is an auspicious time to be a talented woman launching a STEM tenure-track academic career" isn't necessarily supported by other research, such as that of the US Census Bureau. Maybe it's more "auspicious" than it has been over the years, but that's not the same as being great overall.
posted by zombieflanders at 11:01 AM on April 14, 2015 [8 favorites]
"This Direct Submission article had a prearranged editor."
A reddit commenter implies that this means the article was not peer-reviewed. Is that true?
posted by muddgirl at 11:02 AM on April 14, 2015
If there were two otherwise identical applications, I'd be more inclined to hire the woman. My social circle was STEM grad students for a while, and the ratio of men to women was at least 2:1.
posted by aniola at 11:03 AM on April 14, 2015
" Unfortunately, there's no practical way of running dummy applications through the faculty hiring process to test that out."
I guess I don't know enough about hiring committees in academia, but my understanding was that they wouldn't be getting a narrative summary of the candidates in an abstract setting. Don't studies dealing with hiring bias generally send in C.V.s and track response rates?
posted by klangklangston at 11:04 AM on April 14, 2015
"This Direct Submission article had a prearranged editor."
A reddit commenter implies that this means the article was not peer-reviewed. Is that true?
It doesn't appear to be true - I should google before asking.
posted by muddgirl at 11:05 AM on April 14, 2015 [1 favorite]
Speaking as a man, albeit one not involved in a STEM field: Good.
posted by Faint of Butt at 11:06 AM on April 14, 2015
The IHE article actually goes into where the study falls short. For instance, it doesn't seem to be testing actual hiring (a decision that may not be limited to the groups the researchers contacted), nor does it deal with gender bias in the STEM fields for those already in those jobs
The study is an article, not a book. An article doesn't "fall short" if it doesn't test the hypothesis you want it to test. Gender bias in STEM for those already in those positions is a different issue than the one this study addresses.
posted by MisantropicPainforest at 11:07 AM on April 14, 2015 [4 favorites]
Sure, but even if their research is sound, getting hired and then experiencing gender bias in one's job (essentially a 100% certainty according to other research) doesn't count as "auspicious" in my book.
posted by zombieflanders at 11:12 AM on April 14, 2015
If there were two identical applications, I'd hire the woman.
That's a reasonable position, but then again if there were "two identical applications" to any academic position ever, it would be a miracle. If you're trying to overcome years of systemic discrimination, the "well, we'll give it to the underrepresented demographic in those cases where all other things are equal" approach just isn't going to cut it.
I think one of the real problems we see in studies of real-world discriminatory effects in hiring is not that you have somebody looking at two identical applications and going "hey, this one has a penis! Let's hire the one with the penis!" It's that when you learn that applicant A has a penis and applicant B doesn't, you start reading special significance into the inevitable differences between A's and B's files that you wouldn't read the same way if they both had penises or if the penis-having was, um, on the other foot. So A went to Yale and B went to Princeton and suddenly that Yale background ("hey, they were trained by so-and-so!") seems really important. Or A has a year off between their first post-doc and their Visiting Assistant Professorship and how troubled you are by that varies according to the gender of the applicant, and so on. I think these sorts of things are often very resistant to simple self-awareness and self-examination because you can always create narratives around even the most minor differences between files (and between job talks and campus visits and interviews and so forth) that seem perfectly plausible, but which may be rooted in some very dubious sets of assumptions.
posted by yoink at 11:13 AM on April 14, 2015 [42 favorites]
There is no "propitious time for women" anyone "beginning careers in academic science" because the job prospects outside academia are astronomical compared to the job prospects in academia, especially for anyone in "math-based fields of science".
Academics in STEM fields can either teach introductory classes at non-research universities, or else spend a decade moving to new cities every few years. And that decade plays havoc with people's relationships, family lives, etc.
I've numerous friends who quit academia for relationships, both male and female. At least in my personal experience, women more often quit proactively before they even had the particular relationship they wished to keep.
I'd suspect that STEM fields shed women faster simply because women pay attention and consider longer term consequences more rationally, so amusingly studies like this might actually improve matters by impacting some women's risk assessments.
I'm surprised by their assertion that the literature contained no "empirical [studies] of sexism in faculty hiring using actual faculty members as evaluators and focusing on fields in which women are most underrepresented", but maybe the social sciences, arts, etc. work out differently and previous work focussed there.
posted by jeffburdges at 11:14 AM on April 14, 2015 [3 favorites]
As an anecdata point I can say that my (R1 level, physics) department has hired more women than men over the last five years (which is actually only 5 hires). This despite a more than 10:1 sex ratio in applicants on average. And in the two cases that men were hired, women were given the initial offers but decided to go elsewhere as each had several competing offers. This is because the (younger, research active) people on the search committees, which sometimes included me, were making a real effort to improve the makeup of the department.
posted by overhauser at 11:28 AM on April 14, 2015 [3 favorites]
I guess I don't know enough about hiring committees in academia, but my understanding was that they wouldn't be getting a narrative summary of the candidates in an abstract setting. Don't studies dealing with hiring bias generally send in C.V.s and track response rates?
Yeah, but that's not what they did here and not really workable in the context of an academic job search. You don't have HR departments hiring faculty from submitted CVs. It's a much more participatory and intimate process. There's no way that the faculty who were sent these narrative summaries of hypothetical candidates would not know A) that the department wasn't actually advertising for a position, B) that these summaries were not the way that information about potential hires gets distributed to the department etc. And there's really no way to make this work. Say I set up an experiment where I only use departments that are actually hiring. I submit a bunch of fake CVs. Well, I also need to submit a bunch of fake letters of reference. But those letters of reference have to be from real people, because if the letter of reference is from Professor Fakely from Fake U, the people on the hiring committee are likely to know. And then the CV can't list any fake articles or research presentations, because if it does and the person on the research committee thinks "hey, I read that issue of Important Research in Your Field and didn't notice that article" or "hey, I organized that panel at Important Research Conference in Your Field and s/he wasn't on it!" the application will get tossed immediately. Etc. etc. etc. It just can't be done because there are insufficient degrees of separation between the people doing the hiring and the people applying for the job.
posted by yoink at 11:29 AM on April 14, 2015 [8 favorites]
Worse, there is no way you'll get hired unless you've a "champion" on the committee anyway, which sounds horribly fraught. You could ask faculty members to rate CVs before seeing letters of reference during actual hiring though. In fact, you'd improve results that way by reducing that initial nepotism filter, and allow letters more time to arrive too, so this study mirrors one phase in a potentially better hiring process.
posted by jeffburdges at 11:43 AM on April 14, 2015
so this study mirrors one phase in a potentially better hiring process.
Except that this study wasn't just a matter of giving people CVs and asking them to compare them: they gave them little hypothetical narratives (including stuff about the applicant's personal life and history that a CV wouldn't include and stuff about how an interview committee responded to them--so they're really asking you to think about people who are far along in the hiring process).
You could ask faculty members to rate CVs before seeing letters of reference during actual hiring though.
Effectively that happens already, though. You read the application letter and the CV, and if you think the person is remotely possible you go on and read the letters of recommendation. But, again, the problem with academia is the small-world effect: the letter itself is going to tell you if the person has worked with a close colleague of yours, for example, so there's no way to just rate CVs in the "abstract" free of other influences.
As to using something like that stage to do research on hiring biases: again, you have the problem that there's no way to create sufficiently plausible fake CVs.
posted by yoink at 11:59 AM on April 14, 2015
Good.
posted by likeatoaster at 12:26 PM on April 14, 2015
In the past two years I've hired five post-docs into junior researcher jobs (we're not academic): three women, two men. No intention either way, just hiring the best qualified candidates.
posted by bonehead at 12:49 PM on April 14, 2015 [1 favorite]
Of course, there will never be a tenured professor who makes as much as a Silicon Valley CEO...
My Silicon Valley company hires professors away from academia into industry. Most come in as mid-level employees. So it's not surprising they make mid-level employee pay.
posted by GuyZero at 12:51 PM on April 14, 2015
I don't have gender stats on our hiring pools, roughly 600 applicants across a number of pools. I could pull them out I suppose, but it would take a few hours of manual work. However, I don't recall them being hugely one way or the other.
posted by bonehead at 12:52 PM on April 14, 2015
I don't have any quibble with the study itself, or the results, but the conclusion that the researchers reached seems overly broad based on the very limited scope of the study. Looking at the hiring biases at the end of the track and concluding that we're on our way to fixing the problems with STEM is seriously flawed. I also disagree with the researchers' statements that tenure-track positions only attract "exceptional" candidates, so that's why their study only compared exceptional candidates. Different colleges are looking for different kinds of candidates, so while professors at those colleges may rank hypothetical candidates the same way, that doesn't necessarily mean they'd ever have access to that tier of candidates, or would even hire them if they had the chance (for example, undergraduate-focused programs are going to look for candidates with strong teaching backgrounds, even if their research strengths are lower).
posted by muddgirl at 1:05 PM on April 14, 2015 [1 favorite]
I also disagree with the researchers' statements that tenure-track positions only attract "exceptional" candidates, so that's why their study only compared exceptional candidates.
Based on my experience, I disagree. We hire post-docs out of the same pool as the R1s for a substantially similar job, minus undergraduate teaching mostly. We actually pay a bit less than the average starting assistant prof, and our "tenure track" is three years rather than five or six. We still get at least 150 to 200 qualified applicants per position (i.e. ones that really meet the minimums, as opposed to just spam applications). We're not just looking at the cream, but the top of the cream.
posted by bonehead at 1:14 PM on April 14, 2015
So where do the other 199 candidates (aka, the cream) go?
posted by muddgirl at 1:26 PM on April 14, 2015
I'm having a hard time seeing how these hypothetical studies can have any relevance to the real world, particularly the small world of academia. People may make different judgements when they know those judgements are being assessed by researchers, even if they don't know what they're being assessed for. And it's not just unidentifiable judgement biases that are a problem, but also things like networking and straight up nepotism. This study might tell us something about how men are really selected compared to women, but I'm not sure how we can find that out, given the problems discussed further up the thread.
posted by howfar at 1:33 PM on April 14, 2015 [1 favorite]
these hypothetical studies
This wasn't a hypothetical study. It was an actual study.
posted by MisantropicPainforest at 1:43 PM on April 14, 2015
So where do the other 199 candidates ... go?
There's been a lot of ink spilled on that topic too.
Nature did a special issue on it a few years ago.
The Economist has a (rather bitter) view.
This is by someone in the environment I have been hiring in.
As near as I can tell, the answer is attrition. They don't end up working in their subject matter fields. The system is brutal, and a lot of people think it should change, but no one knows how to do it.
posted by bonehead at 1:45 PM on April 14, 2015 [1 favorite]
I wish people in 2015 would stop referring to the 70s as "thirty years ago."
posted by oceanjesse at 1:48 PM on April 14, 2015 [5 favorites]
Apparently it was Danny DeVito in The War of the Roses who made this joke famous:
“What do you call 500 lawyers at the bottom of the ocean?”
“A good start.”
I’m glad this sample of academics is making an effort, although I’d be interested in how representative their interview sample is and how much “I’m being interviewed” affected their responses. But it’s a good start!
posted by RedOrGreen at 1:56 PM on April 14, 2015
No, I understand attrition. I get that colleges are competing for the same candidates, and there are only so many positions that open every year, but the same candidate can't work multiple jobs. Not every college is going to get the potential Nobel prize winner who also skydives. Maybe my problem is that I'm underestimating the number of exceptional post-docs? I've definitely met some tenure-track engineering professors at a mid-tier state institution who seemed to meet just the basic qualifications for the job.
posted by muddgirl at 2:06 PM on April 14, 2015
Well, it is an interesting conclusion. But a process similar to the one described by the researchers would only get a person through the door. Potential faculty also have to interact with the community: meet people, present research, and so on. My thought is that looking only on paper, any potential biases are easier to overcome or ignore. More insidious are the small, non-overt biases that come into play through face-to-face interactions, tenure decisions, career support, etc.
This experiment, if I were to use lab nomenclature, is in vitro. As many know, in vitro models often have drawbacks and are only a microcosm, a reduced version of the real, in situ conditions. That doesn't mean the conclusions they reached are necessarily wrong, but I would hesitate to draw huge inferences from such results; instead I would find other ways to test them, using other models. Any conclusions made are only as good as the model the studies are based upon. And if my results clash with the vast amount of published data that examine the effect in situ, I might want to be careful how far I try to interpret my study.
posted by nasayre at 2:17 PM on April 14, 2015 [2 favorites]
This wasn't a hypothetical study. It was an actual study.
True enough, but it was an actual study of hypothetical cases. In other words it studied cases where you asked people "if you were considering this hypothetical person for a job, how favorably would you feel towards them?" And that's a very different thing from presenting people with what they take to be actual cases and seeing how they respond.
posted by yoink at 2:28 PM on April 14, 2015 [1 favorite]
If there were two identical applications, I'd hire the woman.
You can pay them less. You can advance them slower. You can use them for unrecognized departmental admin and organization responsibilities. You can use them to provide informal counseling support for faculty and students. You can overload them with teaching due to their subject popularity and interpersonal skills.
Frankly I am surprised that they are not doing better than 2 to 1 given how undercompensated they are, but I guess you have to still hire some men in order to create the slack that women end up having to pull.
posted by srboisvert at 3:09 PM on April 14, 2015 [12 favorites]
instead I would find other ways to test it, using other models.
Such as....?????
Also: you have a limited budget, must be approved by IRB, and the entire thing must be able to be presented in six pages or less. GO!
posted by MisantropicPainforest at 3:13 PM on April 14, 2015
The complete running away from any risk of cognitive dissonance or suggesting this might be a problem in favour of ridiculous stereotypes and signalling of "I am ok with this" in this thread is hilarisad.
posted by Another Fine Product From The Nonsense Factory at 3:17 PM on April 14, 2015 [2 favorites]
huh
posted by MisantropicPainforest at 3:20 PM on April 14, 2015
Gosh, it's almost like some of the people in this thread are women in tech professions or tech academia! And thus are speaking from a position of intimate experience with the institutions we are speaking about! But that couldn't possibly be true...
posted by muddgirl at 3:25 PM on April 14, 2015 [7 favorites]
I just had a look at the rumor mill for my field. There are a few good women on the shortlists there. 20% of the names on shortlists are female. 28% of the current offers have gone to women.
Unfortunately several of these are the same women (there are only around 8 unique women listed on the shortlists), so I can't guess how it will shake out. But neither of those numbers is strongly out of sync with the actual number of women in the field (approximately 20% of newly earned PhDs in physics in the US were female in 2012).
Actually this is less depressing than I expected, so thanks for posting this study and making me go look.
Anyhow, I'm too lazy right now, but it seems to me there are plenty of (more or less accurate) rumor mills for plenty of STEM fields. Why not look at some actual real world data about actual actions, compare shortlists to hires, and see if the bias persists? I think it's a pretty reasonable approximation to say that everyone who makes a shortlist is of roughly the same quality. (Of course everyone who makes a shortlist has already passed some filter, which itself may have bias, but this is a place to start with real world data).
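For a sense of what that check could look like, here is a minimal Python sketch. The counts are invented stand-ins for real rumor-mill data, and the one-sided binomial test assumes independent offers, which the repeated shortlisting noted above already violates:

# Back-of-the-envelope check (invented counts, not real rumor-mill data):
# is the share of offers going to women consistent with their share of
# shortlist slots? One-sided binomial test, assuming independent offers.
from math import comb

def binom_pmf(k, n, p):
    # Probability of exactly k "offers to women" out of n offers at rate p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def upper_tail_p(k, n, p):
    # Chance of seeing k or more offers to women if the true rate is p.
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

shortlist_share = 0.20   # women's share of shortlist slots (quoted above)
offers_total = 25        # hypothetical number of offers tracked
offers_to_women = 7      # 28% of 25, matching the quoted offer share

p = upper_tail_p(offers_to_women, offers_total, shortlist_share)
print(f"P(>= {offers_to_women} of {offers_total} offers | 20% base) = {p:.3f}")

With counts this small, and the same few women appearing on multiple shortlists, the result would only be suggestive, but that is the shape a real-world version of the check would take.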
(I don't tend to trust survey-based studies much because I too am good at proclaiming myself to have no bias.)
Also, frankly, call me when I am not currently attending a workshop where less than 10% of the speakers are female.
posted by nat at 3:57 PM on April 14, 2015 [6 favorites]
Why not look at some actual real world data about actual actions, compare shortlists to hires, and see if the bias persists?
Because there are a whole lot of endogeneity problems with observational data on this topic.
posted by MisantropicPainforest at 4:37 PM on April 14, 2015
I guess I don't know enough about hiring committees in academia, but my understanding was that they wouldn't be getting a narrative summary of the candidates in an abstract setting. Don't studies dealing with hiring bias generally send in C.V.s and track response rates?
As mentioned above, it would be a lot harder to do this than just compiling fake CVs. A full application package for a tenure track position will typically include the letter (which should be at least somewhat tailored to the position, or at least the type of position -- an R1 school requires a different letter than does a liberal arts college, and both of those are different than a Directional State U), the CV, the letters of reference (which as mentioned are going to be from people known to the search committee), sometimes a statement of teaching philosophy and/or a statement on diversity, a writing sample (eg a dissertation chapter or a published article), plus supplementary materials such as teaching reviews, syllabi, etc.
These days it is all submitted electronically, but it totals out to a big package when printed and it would not be trivial to fake it, unlike those studies where they sent in fake resumes for entry level positions at big companies and at university labs.
posted by Dip Flash at 7:52 PM on April 14, 2015 [1 favorite]
If the study had the opposite conclusion we wouldn't be nit-picking it half as much.
posted by Joe in Australia at 8:00 PM on April 14, 2015
Yes, novel, contrarian and controversial conclusions do generally attract more scrutiny.
posted by klangklangston at 8:26 PM on April 14, 2015
Yes, novel, contrarian and controversial conclusions do generally attract more scrutiny.
I would be wary of the sudden jump to criticise the methodology, if only because it produces some strange bedfellows. There certainly are people who have attacked this methodology in the past, and who have done so for ideological reasons (because the results up to now did not conform to their narratives), but I suspect they are not the sort of people one generally wishes to be associated with, or seen to be making the same arguments as. At least, not in this space.
I note, for example, that just a bit above where I'm typing now, srboisvert has collected quite a few favorites for a comment which literally begins with a common MRA-arguing-against-wage-gap talking point. Strange bedfellows indeed.
posted by hrwj at 10:43 PM on April 14, 2015
Meh. I stayed a vegetarian even after I found out Hitler was too. There are wage gap studies with various levels of reliability, but the underlying premise that the wage gap is unfair is a political one, not a scientific one.
posted by klangklangston at 12:29 AM on April 15, 2015
I have questions about the methodology because these results contradict the results of many other studies and are contrary to what the data actually show us about who gets jobs in academia. If a hypothetical question (who would you hire?) contradicts the actual data (who gets hired?), that is grounds for concern.
And also, I went to grad school for 9 years to learn how to design research studies and how to critique others' designs and that's a big part of being a researcher. If you ever talk to any social or natural scientist about any study, we usually spend time talking about the experimental design and whether it measures what the authors claim it measures. That's part of good peer review and part of being a good scientist.
posted by hydropsyche at 5:40 AM on April 15, 2015 [4 favorites]
Also on Metafilter, on any study, regardless of conclusion, you will have a host of frankly stupid objections, like:
"this is my surprised faced"
"correlation does not equal causation"
If a lab experiment: "you need to do this in the real world"
If a field experiment: "you need to do this in a lab"
"the n is less than 100000 so the sample size is too small"
"They should have done XYZ in addition to what they did" (provided XYZ is cost prohibitive or unethical)
posted by MisantropicPainforest at 5:41 AM on April 15, 2015 [2 favorites]
"this is my surprised faced"
"correlation does not equal causation"
If a lab experiment: "you need to do this in the real world"
If a field experiemtn: "you need to do this in a lab"
"the n is less than 100000 so the sample size is too small"
"They should have done XYZ in addition to what they did" (provided XYZ is cost prohibitive or unethical)
posted by MisantropicPainforest at 5:41 AM on April 15, 2015 [2 favorites]
On preview, after reading hydropsyche's point,
my opinion is that good criticisms of methodology are ideologically neutral.
Bad criticisms are not.
posted by MisantropicPainforest at 5:43 AM on April 15, 2015
So, am I allowed to ask about error bars in a pop science article that provides no indication at all of between group versus within group variation? I certainly would never recommend publication on an article that I was reviewing for a journal that did not provide some indication of variance. Or is that just my ideological bias showing?
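To make the point concrete, here is a toy Python illustration with invented ratings; it shows why a gap in group means says little without the within-group spread that error bars report:

# Toy illustration (invented ratings, not the study's data): two groups
# can differ in mean while overlapping enough that the gap means little.
from math import sqrt
from statistics import mean, stdev

male_slate = [7.1, 8.3, 6.5, 9.0, 7.8, 6.9]     # hypothetical ratings
female_slate = [7.9, 8.8, 7.2, 9.4, 8.1, 7.5]   # hypothetical ratings

for label, ratings in [("male", male_slate), ("female", female_slate)]:
    sem = stdev(ratings) / sqrt(len(ratings))   # standard error of the mean
    print(f"{label} slate: mean = {mean(ratings):.2f} +/- {sem:.2f} (SEM)")

# Unless the between-group gap is large relative to these error bars,
# a chart of bare means overstates what the data actually show.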
posted by hydropsyche at 6:06 AM on April 15, 2015
On the contrary: I actually enjoy reading critiques of scientific studies; I think it's a pity that we seem to give a pass to studies that confirm our beliefs.
posted by Joe in Australia at 6:22 AM on April 15, 2015 [1 favorite]
I don't think it's Metafilter's job to critique every study - of course it's a benefit when someone does this for the community free of charge, especially for papers that become PR fodder in the pop science press. Papers that are contrarian or reach bold conclusions get more press, so I think it's natural that they should get more public scrutiny as well. That's a good thing. That doesn't mean that non-PR papers aren't getting any scrutiny at all - there's still a community of scientists who are reading, digesting, and responding.
Could you link to a thread on any study here that "we" gave a pass to because it confirmed "our" beliefs?
posted by muddgirl at 7:37 AM on April 15, 2015 [4 favorites]
Ugh, rereading my comment, my scare quotes come off as passive-aggressive and sarcastic, completely not my intention. I just have a hard time thinking of Metafilter having a collective opinion or belief on topics as complex as human behavior.
posted by muddgirl at 8:49 AM on April 15, 2015
There certainly are people who have attacked this methodology in the past, and who have done so for ideological reasons (because the results up to now did not conform to their narratives)
"This methodology"? I'm unaware of any previous study using this methodology. So far as I can see, the critiques of this study's methodology in this thread are mostly centered around the fact that it doesn't compare well with previous studies which have engaged with actual hiring practices (e.g. sending fake CVs to HR departments and looking at response rates) and instead examines hypothetical responses to hypothetical cases.
To be precise, that's not in itself a criticism of "methodology" so much as a caution about what the study is actually examining. It simply isn't a study of "hiring practices" in academia--it's a study of what academics abstractly profess their hiring preferences to be in hypothetical instances. That's an interesting thing to study and the findings are not without value; but it's important to recognize that it's something quite different from a study of "who actually gets hired in academia."
I just have a hard time thinking of Metafilter having a collective opinion
Think about any socially or politically divisive issue (racism, police brutality, reproductive rights, economic justice etc.). Now, imagine a discussion in a typical Metafilter thread. I'm pretty sure you can conjure up in your mind the specific usernames of ALL the (small handful of) long- and medium-term Mefites who might voice an opinion to the right of, say, a typical California Democratic voter.
There are certainly issues about which Metafilter disagrees and can have a robust, many-sided discussion, but it's silly to pretend that the userbase here doesn't have pretty marked demographic, social and political characteristics.
posted by yoink at 9:36 AM on April 15, 2015 [4 favorites]
"This methodology"? I'm unaware of any previous study using this methodology. So far as I can see, the critiques of this study's methodology in this thread are mostly centered around the fact that it doesn't compare well with previous studies which have engaged with actual hiring practices (e.g. sending fake CVs to HR departments and looking at response rates) and instead examines hypothetical responses to hypothetical cases.
To be precise, that's not in itself a criticism of "methodology" so much as a caution about what the study is actually examining. It simply isn't a study of "hiring practices" in academia--it's a study of what academics abstractly profess their hiring preferences to be in hypothetical instances. That's an interesting thing to study and the findings are not without value; but it's important to recognize that it's something quite different from a study of "who actually gets hired in academia."
I just have a hard time thinking of Metafilter having a collective opinion
Think about any socially or politically divisive issue (racism, police brutality, reproductive rights, economic justice etc.). Now, imagine a discussion in a typical Metafilter thread. I'm pretty sure you can conjure up in your mind the specific usernames of ALL the (small handful of) long- and medium-term Mefites who might voice an opinion to the right of, say, a typical California Democratic voter.
There are certainly issues about which Metafilter disagrees and can have a robust, many-sided discussion, but it's silly to pretend that the userbase here doesn't have pretty marked demographic, social and political characteristics.
posted by yoink at 9:36 AM on April 15, 2015 [4 favorites]
Certainly on issues of sexism in STEM, my username is right up there with those preaching to the Mefi choir. And yet, I read the article and, like many of the people in this thread, had no particular issue with the methodology or the results. I disagreed that the results were sufficient to lead to the stated conclusion, but I did not nit-pick the methodology as "we" were accused of doing.
To me, the reason I disagreed with the conclusion is not because I have an irrational, unsubstantiated, and unscientific belief in sexism. It's because the results don't disagree with my model of human behavior or my lived experiences of sexism, but the conclusion does. Other members of the Mefi echo chamber, right here in this thread, have other models and other lived experiences, and have presented alternative reasonings. Overall it's been a pretty varied discussion, not at all like an echo chamber.
posted by muddgirl at 9:55 AM on April 15, 2015 [2 favorites]
To be honest, MisantropicPainforest, this discussion would be a lot less like an echo chamber if it didn't seem like your only purpose for being in the thread was to call people stupid or biased. There are points all over the place that you haven't engaged with, opting for drive-by sniping instead. If you want to discuss this with people, we're here. If we're too useless to engage with on this topic, why bother?
posted by howfar at 12:56 PM on April 15, 2015
I've defended this study from silly criticisms, and that makes it less like an echo chamber? How?
What points do you want me to engage with that I haven't?
posted by MisantropicPainforest at 1:00 PM on April 15, 2015
Also please tell me where I called anyone biased.
posted by MisantropicPainforest at 1:20 PM on April 15, 2015
Personally, I've got a question. I'm a theoretical physicist, so I don't understand your comment in response here. And I don't really appreciate you calling my critique silly, either.
Previous question:
Why not look at some actual real world data about actual actions, compare shortlists to hires, and see if the bias persists?
Your response:
Because there are a whole lot of endogeneity problems with observational data on this topic.
Please explain further; that's a bit short. Data is problematic and may have bad features. (FWIW I also don't really understand why this data is particularly prone to endogeneity... but that's not actually my major point.)
You haven't explained at all why studying actual hiring data, particularly if you are aware of its nature and the possible bad features it may contain, would be any worse than studying people's self-reported behavior. There's plenty of research (a random google gives this page with some references to studies on accuracy of self-reporting at the bottom; if you actually work with social science data of this nature I'm sure you have many more references yourself) indicating it's rather problematic.
I would strongly prefer a study method that is based on real world data. And actually when I did my quick check above, I was pretty heartened to learn the numbers for my field weren't terribly far off of expected representation (following from PhDs onwards). I didn't get hired, and I have dealt with some shit due to my gender, so it's actually nice for me to see that, on some rough level, it's not like I'm not getting hired because of my gender.
I don't know what a real-world data study would show-- but I know that whatever it were to show, I'd trust it more than I do this self-report based one.
posted by nat at 4:03 PM on April 15, 2015 [1 favorite]
Data isn't endogenous; independent and dependent variables are. Many times real-world data cannot tell you what you want to find out. Experiments solve endogeneity problems but create external validity ones.
Let's say you look at the real world data. What would you find that would strengthen or weaken your theory?
Gender (dis)parity in hiring? Well no, because it could be that men/women are better/lured away from academia/quit academia etc. What if you condition on a sufficient number of variables? Well, throwing in more variables eats power and runs the risk of endogenous selection bias. Anyway, what you are really trying to do in this situation is ask the question:
If you have two candidates, A and B, and they are exactly the same, except A is a man and B is a woman, then what differences do they experience in hiring?
The problem is, in the real world, that situation does not exist, ever. There are no two identical candidates that differ only in being a man/woman. Experiments can create that scenario, but obviously run the risk of not accurately mimicking the real world.
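To see the endogeneity worry concretely, here is a toy Python simulation; every parameter is invented, and it isn't a model of any real field. Even a perfectly gender-blind committee produces skewed raw hire rates when attrition differs upstream:

# Toy simulation (invented parameters): gender-blind hiring still yields
# unequal observed hire rates when attrition differs before application.
import random

random.seed(0)
applicants = {"M": 0, "F": 0}
hires = {"M": 0, "F": 0}

for _ in range(100_000):
    sex = random.choice("MF")
    quality = random.gauss(0, 1)
    # Suppose strong women are lured away more often, *before* any
    # committee ever sees a CV (the selection problem in the data).
    leave_prob = 0.5 if (sex == "F" and quality > 0) else 0.3
    if random.random() < leave_prob:
        continue                      # never applies
    applicants[sex] += 1
    if quality > 1.0:                 # gender-blind bar: quality alone
        hires[sex] += 1

for sex in "MF":
    rate = hires[sex] / applicants[sex]
    print(f"{sex}: hire rate among applicants = {rate:.3f}")

The rates diverge even though the simulated committee never looks at gender, which is why raw observational hire rates can't isolate bias in the hiring decision itself.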
Moreover, this doesn't run into a lot of the self-reporting problems of surveys (it is a field experiment and not a survey experiment) about sensitive questions, because the respondents do not know what the treatment is. It's not like the experimenters sent them a male applicant, asked them for their opinion, then said, "actually it's a woman!", and their response changed.
FTA:
We could not simply send faculty members two identical candidate descriptions differing only in gender and ask which person the faculty member preferred to hire. Such a transparent approach would have revealed our central question and compromised the results.
The paper also addresses the problem that the respondents do indeed know what the treatment is, and choose to express a preference for women because society tells them they should, when in reality they don't share those preferences. They re-do one of their experiments with just one applicant. And what do they find?
The existence of a preference for women when faculty rate only one applicant suggests that norms and values associated with gender diversity have become internalized in the population of US faculty.
posted by MisantropicPainforest at 4:31 PM on April 15, 2015 [2 favorites]
Yeah, I'd agree that redoing the experiment with a single applicant helps address the concern that respondents recognize the treatment. And they'd have landed the opposite result if they'd run this in the '70s. I've no desire to criticize this paper beyond my earlier mocking of their "propitious time" overstatement, and I'm sympathetic with the final sentence following it, but...
There is an established difference in behavior depending upon immediacy. Imagine you're ordering groceries online. If delivery is next week, then you'll order nice healthy salad fixings. If it's in 10 minutes, then you'll order chips and salsa. It's possible a faculty member might "internalize [the] norms and values associated with gender diversity" but act contrary to them under pressure. We've no evidence of this (faculty members in this thread actually indicate the opposite), but it's possible.
As I mentioned upthread, there is no hiring phase where only the CVs get considered, because faculty members can immediately examine the recommendations if they find a CV interesting. It's possible faculty employ different language when describing their male and female students. I'd wager parents still do that; professionalism should mute it, of course, but still.
An interested party could try convincing the AMS to let them run a textual analysis on the letters of recommendation stored in their mathjobs.org site. In fact, they could break it down not only by applicant gender, but by year over the last decade, and by recommendation-writer age and gender.
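A minimal sketch of what such a textual analysis might look like, assuming one somehow obtains plain-text letters tagged with applicant gender and year (mathjobs.org offers no such export; the word list and data layout here are purely hypothetical):

```python
# Hypothetical sketch: rate of "standout" adjectives per recommendation
# letter, grouped by applicant gender and year. Word list is illustrative.
import re
from collections import defaultdict

STANDOUT = {"exceptional", "outstanding", "brilliant", "superb",
            "extraordinary", "remarkable", "gifted"}

def standout_rate(text: str) -> float:
    """Standout adjectives per 1,000 words of a letter."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return 1000.0 * sum(w in STANDOUT for w in words) / len(words)

def summarize(letters):
    """letters: iterable of (gender, year, letter_text) tuples."""
    by_group = defaultdict(list)
    for gender, year, text in letters:
        by_group[(gender, year)].append(standout_rate(text))
    for (gender, year), rates in sorted(by_group.items()):
        print(f"{year} {gender}: {sum(rates) / len(rates):.2f} standout adj per 1k words")
```

The same grouping extends naturally to recommendation-writer age and gender once those fields are available.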
Finally, there are similar empirical studies in the social sciences claiming that gender discrimination varies depending upon the candidate's perceived level. In other words, these findings might apply more to research institutions and less to teaching-oriented colleges. And gender might influence student teaching evaluations and TA teaching awards.
There's no reason this particular study should've had to look at the above concerns, because their results are already quite interesting. Just sayin'.
posted by jeffburdges at 9:47 PM on April 15, 2015
Hey - someone did your study on letters of reference and gender for you.
Recommenders used significantly more standout adjectives to describe male candidates as compared to female candidates, even though objective criteria showed no gender differences in qualifications. It is likely that evaluators place higher weight on letters that describe a candidate as the most gifted, best qualified, or a rising star. This could mean that even a small difference in the proportion of standout adjectives used in describing female candidates could translate into much larger evaluative effects.
And another one, this one looking at medical school faculty (pdf):
This study examines over 300 letters of recommendation for medical faculty at a large American medical school in the mid-1990s, using methods from corpus and discourse analysis, with the theoretical perspective of gender schema from cognitive psychology. Letters written for female applicants were found to differ systematically from those written for male applicants in the extremes of length, in the percentages lacking in basic features, in the percentages with doubt raisers (an extended category of negative language, often associated with apparent commendation), and in frequency of mention of status terms. Further, the most common semantically grouped possessive phrases referring to female and male applicants (‘her teaching,’ ‘his research’) reinforce gender schema that tend to portray women as teachers and students, and men as researchers and professionals.
posted by ChuraChura at 5:51 AM on April 16, 2015 [9 favorites]
Williams and Ceci’s data show that, amongst their sample, women and male faculty say they would not discriminate against a woman candidate for a tenure-track position at a university. Sounds great, right? The problem is the discrepancy between their study design, that elicits hypothetical responses to hypothetical candidates in a manner that is nothing like real-world hiring conditions, and the researchers’ conclusions, which is that this hypothetical setting dispels the “myth” that women are disadvantaged in academic hiring. The background to this problem of inequality is that this is not a myth at all: a plethora of robust empirical research already shows that, not only are there less women in STEM fields, but that women are less likely to be hired for STEM jobs, as well as promoted, remunerated and professionally recognised in every respect of academic life.
posted by ChuraChura at 9:43 AM on April 16, 2015 [8 favorites]
This kind of study is incredibly frustrating because it tells a story many of us would love to believe: that gender discrimination in academia is dying. Williams and Ceci rightly note that there are serious cultural biases pushing girls and women away from STEM, especially the more math-oriented fields. Those biases stymie female ambition long before women would be in a position to apply for university jobs. In physics, for instance, newly minted female job candidates are hired in rough proportion to the number of Ph.D. degrees handed out to women—stats that square with the rosy picture painted in Williams and Ceci’s paper. But women aren’t promoted to higher levels in the same proportion. According to the most recent American Institute of Physics analysis, women constituted just 18 percent of all physics and astronomy university faculty in 2010, with that proportion dropping with each higher rank. Given the power that senior faculty members have over promotion within their departments, simply hiring more women isn’t solving anything: They aren’t getting tenure, they aren’t staying, and older men still continue to control departments.
posted by ChuraChura at 2:25 PM on April 20, 2015 [3 favorites]
The Big Lie of Science is that it doesn’t matter who does the science, as long as the research is sound. The truth is that scientists judge each other’s work through their own prejudices, and the Lie lets them get away with it. The Lie lets people remain silent when they see their colleagues being mistreated, because “personality shouldn’t matter”.
and
After submitting a scientific manuscript for peer review -- the process by which scientists uninvolved in a study will decide whether it's fit to publish -- two female researchers got a nasty shock: The sole review attached to their rejected study suggested that bringing some men into their team might fix all its problems.
“It would probably … be beneficial to find one or two male biologists to work with (or at least obtain internal peer review from, but better yet as active co-authors)” to prevent the manuscript from “drifting too far away from empirical evidence into ideologically biased assumptions,” the reviewer wrote in one portion.
“Perhaps it is not so surprising that on average male doctoral students co-author one more paper than female doctoral students, just as, on average, male doctoral students can probably run a mile a bit faster than female doctoral students,” added the reviewer (whose gender is not known).
posted by ChuraChura at 3:17 PM on April 30, 2015 [5 favorites]
Thanks for posting that, ChuraChura, that peer review story made me so mad I could hardly see straight. It's one thing for some awful sad person to write that terrible review, but it's another thing entirely for the journal to send it back as the only feedback with the rejection of their submission. Absolutely shameful on all counts.
posted by dialetheia at 9:54 PM on April 30, 2015 [1 favorite]
Come on, y'all. The study in the FPP found that sexism is no longer a problem in science. Clearly, anyone who claims otherwise has an ideological agenda and requires a male co-author on their paper. Because this one time, one study found that sexism is no longer a problem in science.
I saw the new Avengers last night, and it reminded me of just how awesome FEMINIST HULK is. Because sometimes, the bullshit that we have to put up with does not merit reasonable careful discussion. Sometimes, SMASH.
posted by hydropsyche at 5:30 AM on May 1, 2015 [3 favorites]