A social experiment. What is the worst that can happen?
I am a postdoc and I have been applying for jobs in both industry and academia. My h-index (~7) is good enough for junior faculty.
Alongside my academic CV I have an industry-oriented CV, and I send both of them out accordingly. I have had a handful of final-stage interviews (faculty/scientist) in academia, but none in industry so far.
I suspect I am being interviewed as the "token diverse female" (I'm Asian), as my area of science is dominated by white men. The whole experience, along with prior job hunts, has led me to suspect that my gender and race may be hindering my earning potential. I believe I have the required qualifications and skills.
I am thinking of reapplying as a white male to the same industry jobs I was rejected for (especially the rejections without an interview), just to see how far along I would get. Only for industry jobs, because those CVs don't make it to the chief scientist's table. Maybe I will make a documentary or blog about this if there are significant findings. Now put your imagination to the test: what is the worst that can happen?
Tags: postdocs, job-search, job, gender, ethnicity
Comments are not for extended discussion; this conversation has been moved to chat. – StrongBad♦ (5 hours ago)
Hiring biases in industry are off-topic. – Azor Ahai (3 hours ago)
asked 2 days ago by FrostedCentral; edited yesterday by kubanczyk
3 Answers
I like to think I have a pretty vivid imagination, so the worst thing I can reasonably imagine happening is that you would be seen as trying to perform an experiment with human participants without proper controls, questionable experimental design, possibly a lack of appropriately rigorous analysis (if this isn't your specialty), and lack of ethical review and oversight. From an ethics perspective, you propose to use deception on uninformed participants while possibly obtaining personally identifiable information on them which could lead to them facing serious social consequences if they were identified. With the mob mentality of the viral internet as it is, the consequences could be pretty darn severe - up to loss of job, death threats, actual violence, and more. Sensitive topics require sensitive handling, which is what any functioning IRB and responsible researcher would insist on.
In reporting on the results, even if informally on a blog, you could be seen as offering a pseudo-scientific view (or worse) on a controversial and important topic. If your experiment was performed poorly or your analysis done improperly, you could lend weight to an incorrect view - either providing what could be cited as evidence that discrimination does not exist where it does (thus making it harder for people discriminated against to make changes or be taken seriously), or supporting the view that discrimination does exist where it doesn't (leading to negative consequences for people who are doing nothing wrong and deflecting attention away from more pressing, extant issues). Bad science, even done with good intention, can easily make the world worse.
If some random blogger or journalist did this, most of us could grudgingly dismiss it as "they don't know any better", or just the world of click-bait, etc. But if you were a qualified scientist who should know better, people might not be so willing to dismiss such activity just because it wasn't intended to be scientific and it wasn't intended for publication. Most people won't even know the difference between what is and isn't intended to be scientific when done by a scientist, and many who do know the difference might not consider it an excuse.
As Dawn pointed out in a comment, one version of this is called an audit study, and there is a pretty large body of literature that tries to do basically what you are suggesting, in a systematic way. I cannot even try to count how many studies of this sort have been published, but I'd be surprised if the number weren't already in the thousands, looking at everything from gender to race to the impact of resume employment gaps of varying length.
Finally, the nature of this sort of field study is that, even with everything going your way, it is hard to do correctly. No simple analysis method works even if you did everything right and collected all the data appropriately. There is too much randomness, too much heterogeneity, too much structure, to allow any simple bit of statistics to give the correct interpretation. In short, unless this is your specialty, it would be trivially easy to get everything else right and still come to exactly the wrong conclusion.
For those who are not familiar with this kind of statistics, a classic example of how a simple analysis can go wrong is Sex Bias in Graduate Admissions: Data from Berkeley. In a simple aggregate analysis, it looked quite clear that women were being admitted at lower rates than men, and the bias seemed so obvious that the deans of the school were concerned it could be the basis for a lawsuit. It turned out to be a nice example of Simpson's Paradox: the cause of the difference was that women were more likely to apply to departments that were crowded and competitive, and thus harder to get into for everyone, while men were more likely to apply to departments that were less competitive.
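As a toy illustration of that reversal, here is a minimal Python sketch with made-up numbers (chosen only to mimic the Berkeley pattern, not the actual data): women are admitted at a higher rate within each department, yet at a lower rate overall, because most of them applied to the more competitive department.

# Toy illustration of Simpson's Paradox. All numbers are invented for this
# example; they are not the actual Berkeley admissions data.

# (applicants, admitted) per department, split by gender
departments = {
    "A (less competitive)": {"men": (800, 500), "women": (100, 70)},
    "B (more competitive)": {"men": (200, 40), "women": (700, 160)},
}

def pooled_rate(pairs):
    """Pooled admission rate from a list of (applicants, admitted) tuples."""
    applicants = sum(a for a, _ in pairs)
    admitted = sum(x for _, x in pairs)
    return admitted / applicants

for dept, groups in departments.items():
    m_app, m_adm = groups["men"]
    w_app, w_adm = groups["women"]
    # Within each department, women are admitted at a higher rate than men.
    print(f"{dept}: men {m_adm / m_app:.1%}, women {w_adm / w_app:.1%}")

men_overall = pooled_rate([g["men"] for g in departments.values()])
women_overall = pooled_rate([g["women"] for g in departments.values()])
# Pooled over departments, the ranking reverses: men look favored overall.
print(f"Overall: men {men_overall:.1%}, women {women_overall:.1%}")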
If a similar condition existed in the employment sector, where you were applying to jobs in industry that turned out to vary in their selectivity in a way that you were not considering, this would mess up your analysis, and you cannot easily collect more information that would allow you to fix it. After all, I'm sure you weren't inclined to use random selection in your own employment search!
So, in summation, the worst that I could imagine happening is: you end up doing bad science that reflects badly on you and is not easily excused just because it wasn't intended for publication; you come to the wrong conclusions, and in a way that could hurt innocent people; you casually report information that could be used in dangerous and damaging ways; you end up being identified as the person responsible and it goes viral, so the most famous thing you'll ever be known for is this thing you didn't intend as a serious study (and which could have gone horribly wrong); and, as a bonus, you could just end up wasting your time and the time of others for no benefit.
And since it's the worst thing that can happen, I suppose you could also end up with a headache. Things can always be worse by adding a headache.
answered 2 days ago by BrianH
The question was "what's the worst that can happen", so this answer rightly applies a strict standard. But we don't generally demand scientific rigour from blog posts, journalism, or political advocacy. For example, interviewing an arbitrary sample of people in the streets is not a representative opinion poll, yet it is common practice in TV and newspaper reporting. – henning (yesterday)
@henning Those organizations do pay a price for their lack of rigor, in lost reputation as well as in lawsuits. – A Simple Algorithm (yesterday)
@A Simple Algorithm Not really. – henning (yesterday)
Oh, hey. It can get worse. Some unstable SJW could read your research, get offended, and decide to assassinate the presidents of the institutions you contacted, "just to teach them a lesson". – James Martin (8 hours ago)
@JamesMartin You are correct. Assassination is one example of the "actual violence" mentioned in the first paragraph of the answer. – Kyle A (7 hours ago)
You're asking the wrong question.
... what is the worst that can happen?
Others have answered this. But it's the wrong question. What you should really ask is:
What's likely to happen?
You are likely not to get any statistically significant information, and probably not even a sound hunch about why one version of your application fared better than the other. You are likely to get into at least a mild amount of trouble with some of the potential workplaces when you retract your application, or when they call up your references, former universities, etc. In my opinion, it is likely your experiment will not have contributed anything, even marginally.
If you believe "affirmative-action"-type hiring is nothing but tokenism and is inappropriate or discriminatory against the people it passes over, act against it where you are actually present and have access to information, such as your next workplace, and in wider social contexts (e.g. participating in public awareness-raising campaigns, lobbying elected officials, organizing petitions and demonstrations, etc.).
PS - Please do not construe this answer as an endorsement or criticism of "affirmative-action"-type hiring practices.
answered yesterday (edited 22 hours ago) by einpoklum
The worst thing that can happen is that you get the study setup or the statistics wrong and then publish a flawed interpretation on the blog. The world is already too full of oversimplified blogs and video documentaries on perceived gender biases.
There is a lot of research by people who devote their scientific careers to this, and the statistics of "who studies what" are quite influential. For example, I (as a technical team lead doing a lot of technical interviews) observe that most women who started studying engineering 10 years ago (in Central Europe) had thought about their choice of study well before enrolling, while for many men it was "the default option". This alone makes it reasonable to select them for interviews at a higher rate. I can't quantify this; read the research on it.
In the worst case (I assume you are in a STEM field), presenting a flawed analysis without peer review could cost you job opportunities.
answered 2 days ago by Sascha