> Computer code written by women has a higher approval rating than that written by men - but only if their gender is not identifiable, new research suggests.
That's because women on Github usually make pull-requests like "Fix typo in comment".
Of course those are easier for maintainers to accept than actual changes that could potentially break the program's functionality.
Question: If that's true then why don't all women submit code under a non-gender online handle?
Answer: All women are vain whores who only submit code because they want approval attached to their name.
And just because it's a study doesn't mean it's not true. You have to evaluate it based on its methodology. While this study is not perfect, it was decently executed.
Wrong. In fact, women often contributed more lines of code. See the paper itself.
> If that's true then why don't all women submit code under a non-gender online handle?
Because "women" is not one person. Some women are different from others. One difference may be that one does not disclose their gender in an online profile.
>Answer: All women are
So you took a property which is *clearly* known to apply to a proper subset of women, and assumed the cause of this property holds for all women?
Here's a question for you: if all women are vain whores, and vain whores submit code under a gendered online handle, why are some women submitting code under a non-gendered online handle?
Hint: your original answer is wrong, dumbfuck
>being this buttblasted
Jesus fuck are you a girl?
> reasonable rational arguments
"And just because it's a study doesn't mean it's not true. You have to evaluate it based on its methodology. While this study is not perfect, it was decently executed."
>Out of 4,037,953 GitHub user profiles with email addresses, we were able to identify 1,426,121 (35.3%) of them as men or women through their public Google+ profiles
>We are the first to use this technique, to our knowledge.
35.3% Omg, really? You mean, you expect this to be a completely random representative sample? Well fuck me sideways.
>Could other GitHub users, who weren't snooping around on people's G+ profiles, identify the gender of a person contributing a pull request? To determine that, they created a sample of random profiles and used a combination of tools (including a panel of human judges) to infer gender from profile names and pictures. A profile was considered gender-neutral if it had a non-gendered user name like "fuzzlewump" and an identicon instead of a photo. Others were grouped into male and female.
HOOOOLLLLLLY SHIT. Science, brah! Why didn't they just introduce some fake results also, you know, coz, who cares? What about asking why the code had been rejected or whatever?
Also, on the Q&A page from a peer of one of the authors of this 'study':
>You do understand that the premise of this is completely false? That an evaluation of how many pull requests are accepted/rejected is of no value at all if it doesn't consider why the request was accepted/rejected, i.e. was it rejected because of bad coding, or because it didn't fit with the project aims and/or ethos. For example, any one of those rejected pulls (from either gender, identifiable or not) might have been because the particular code modification was a duplicate of something someone else had already done, or because the project owner doesn't want his/her code to contain that capability, etc.
>This looks like another complete misuse of data to represent something which the data doesn't prove in any way, shape or form. Surely if you wanted this study to be of any value you would have gone back to the project owners to understand why the rejected requests were rejected. Not assumed it was always because of gender bias and/or bad coding.
>I'm afraid this looks like so many other studies which reach similar conclusions. Performed with an already decided agenda. Hence the starting assumption that rejects/accepts would be because of gender bias, which led to the incorrect conclusion that this somehow gave some indication of the coding abilities of the identifiable genders, and bias against identified genders. All of which was reached without ever examining the code itself, or the actual reason for the reject/accept as given by the project owner. It's like you don't understand that correlation doesn't equal causation, and went on to produce a report to prove it.
I already posted this at you. The same still applies.
Being a good programmer is not useful alone if you have any ambitions at all.
I would never accept a code contribution from a woman. The only code women contribute is code of conduct. Plus women can't handle the banter. You have to pretend their code is perfect and wear kid gloves around them and walk on egg shells because if you criticize them and hurt their feelings they will accuse you of rape and they will have you arrested as a sex offender. Women never advance anything, they just end up lowering the standards of what is acceptable and that is detrimental to progress in the long run. Women need to stay at home looking pretty, making and raising babies, cooking and cleaning.
>35.3% Omg, really? You mean, you expect this to be a comletely random representative sample?
I don't think that's what they were going for. Under normal circumstances, I would agree that a random sample is the gold standard, but it's really easy to get the entire population in this case.
The real problem is that they narrow their population to G+ users. My gut feeling tells me this nullifies the results of their study because (I think) G+ GitHub users are fundamentally different from non-G+ GitHub users.
>"Could other GitHub users..."
Yeah, they really fucked up here. I guess we will have to wait (for peer review) to see if this tactic irrevocably fucked their study.
This dyke wishes she could have my bodily fluids but i'd never let her taste my sweet dick nectar!!
Self-selection bias lads. For the most part, only the most talented women become programmers. Most women who are mediocre and bad simply quit, while plenty of mediocre men become programmers anyway, which brings the men's average down.
Ever wonder why faggot men are happy and dykes are so fucking angry? They've done studies where most dykes are not dykes by choice. They are "forced" into being dykes because they are too ugly or repulsive to get the men they want. It's like guys who turn gay in prison because they have no other choice. Gay men on the other hand usually seem to be more attractive on average and could get tons of pussy if they wanted to.
Tl;dr - most dykes are faking it and this makes them into angry anti-men blue haired beasts.
hold me, someone hold me, i can't breathe....
>Big Black C**k news?
It sure is amaaaaaazing how women appear to just be better than men at everything. I mean, surely there cannot be cherry picked examples or the hiding of copious amounts of evidence to the contrary. Just look at how much better women are at driving compared to men also - btfo boys!
That seems like a lie. I'm pretty sure that gender has nothing to do with being a programmer.
>but only if their gender is not identifiable, new research suggests.
So how do we know if the average male programmer that leaves his gender unidentifiable is even better? Seems like a cheap way to compare the top woman to the average man.
No, it doesn't. To think so is so obviously ridiculous I can't help but think you're being weirdly defensive here. If the solutions are equivalent, then in many (probably most) cases yes, fewer lines is better. But e.g. a refactor is more crucial than e.g. removing some curly braces from a single-line if statement to comply with coding standards, and yet one of these has considerably fewer LoC involved (protip: it's not the important one).
In an ideal world, this is probably the case. But at this point, it's pretty undeniable that there is a large social pressure against women programming, the same way there's a pressure to keep, idk, men from doing ballet. And I'm willing to bet that your average male ballerina (ballerino?) is going to be better than your average female ballerina, much the same way your average female programmer on github is going to be better than your average male programmer on github. Why? Basically, natural selection. If you are so headstrong about getting into that area that you are unabated by social pressures, chances are, there's a damn good reason (or at least, a greater chance of there being a reason than when you have no social pressure against).
How can you people be this stupid? If solution A does fucking bounds checking on an array access while solution B doesn't, solution A has strictly more lines of code, and yet is strictly fucking better. Again, if the solutions are *equivalent*, then there is a very good chance that the smaller solution is better. But nowhere does it say that given an equivalent solution their code was longer, only that they have on average committed more lines of code.
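To spell that out with a toy example (the function names here are mine, not from any project under discussion): the longer function is strictly better, because the extra lines buy robustness.

```python
def get_unsafe(xs, i):
    # Fewer lines of code, but raises IndexError on a bad index.
    return xs[i]

def get_safe(xs, i, default=None):
    # More lines of code, strictly better behavior: an
    # out-of-range index returns a default instead of
    # blowing up in the caller's face.
    if 0 <= i < len(xs):
        return xs[i]
    return default
```

Both "solve" element access, yet judging them by line count alone would pick the fragile one.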
If you fucking regex all newlines and contiguous whitespace out of a source file, you're not a better programmer, you're a god damned retard.
>if the solutions are *equivalent*, then there is a very good chance that the smaller solution is better
Actually, I'm not sure I would even go that far. Real programs aren't code golf. Sure, I could write this context switch as a series of gotos, and that would almost certainly be fewer lines of code while being logically equivalent, but any time I or someone else had to modify the code later, they'd probably want to take it out back and shoot it.
I don't see that with this study.
I see that when you have a group of insiders writing code and they are gender neutral, edits they make go through at about the same rate. When gender is identified, females have the higher acceptance rate.
When it is a bunch of outsiders writing code and the coders are gender neutral, females are more likely to have edits accepted. Once people are gendered, males have a higher acceptance rate.
Confidence intervals here need to be shrunk a little more but it appears that identifying your gender in the inside or outside scenario is a bad idea period.
At any rate, the study needs to actually be peer reviewed so the article OP is linking to is nothing more than click bait for now.
Having now summarily read the paper, I am a little disturbed by their use of statistics. I have a nagging feeling that they don't even know what a null hypothesis is.
I am pretty convinced that there probably is a gendered difference in all this. And it could be a very interesting study. But I do not trust any of their conclusions after reading this.
The entire premise of the paper is flawed
> Our main research question was
> To what extent does gender bias exist among people who judge GitHub pull requests?
There is no null hypothesis here. Nothing you can test true or false. You cannot refute it. If you ask a question this way, you will always find an effect.
When spotting a new planet, you don't go, "How much water is there on that planet?" You go, "Is there water on that planet in the first place?" Your null hypothesis is "there is no water", and then you try to refute it.
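To make the null-hypothesis point concrete, this is what actually testing one would look like: a stdlib-only two-proportion z-test of "the two acceptance rates are equal". All the counts below are made up for illustration; none of them come from the paper.

```python
import math

def two_proportion_z_test(acc_a, n_a, acc_b, n_b):
    """Test H0: acceptance rate of group A == acceptance rate of group B.

    Returns (z, two-sided p-value). Uses the pooled normal
    approximation, so counts should not be tiny.
    """
    p_a, p_b = acc_a / n_a, acc_b / n_b
    p_pool = (acc_a + acc_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 787/1000 accepted vs 745/1000 accepted.
z, p = two_proportion_z_test(787, 1000, 745, 1000)
```

If p is below your chosen threshold you refute "the rates are equal"; otherwise you have found nothing, which is exactly the outcome an "(to) what extent" question can never produce.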
When looking at figure 5, it is obvious that you cannot refute the null hypothesis of men and women being equal with a confidence of less than 32% (one sigma), the lowest confidence bound ever used in scientific statistics. But I wouldn't trust their calculations for one second.
>Torture numbers, and they'll confess to anything. - Gregg Easterbrook
>>has a vagina
>>uploads cute little avatar with a fun smiling wink
>>gets approved by sperglord droolmaster
Next you'll say women drive better than men because they get out of police tickets more often
Of course, the only type of person who would ask this question in the first place would be a social justice warrior.
Seriously, what kind of researcher asks "who writes better code"? What type of fucking moron would actually even ask this sort of moronically brain-dead question in the first place?
And using github? Are you serious? Jesus christ the entire thing is retarded.
Literally no one but social justice warriors would care about such an irrelevant thing, and the social justice warrior will always twist/bend/warp any statistic to fit their view if it doesn't already. We've seen it time and time again.
this. With the standard practice of p<.05, if left unchecked (and this is a study in social science, so of course it is unchecked), 1 in 20 studies will come up with a false positive
Am I the only one here with a fake Gmail and Github account that uses pictures of a qt chinese girl to pass and I purposefully make shitty little contributions to big projects knowing full well that my picture alone will get it pulled?