October 24 2018

Kret
00:21
Uh oh.

> 10 years of sexist training data: it's not difficult to understand that the algorithm doesn't know that male success in tech is in large part due to discrimination.
> One of the first, if not *the* first, things the A.I. would notice is that the success rate of male applications is some 8 times higher than that of female ones.

Again, it depends on what "success" meant. True, it sounds like the measure was more or less spuriously biased, but it could very well not be. What do you mean by the success rate being 8 times higher for males? What if the "8 times higher success rate at employment" simply meant that 8 times more males were hired than females? Then how many males and how many females applied?
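A toy calculation of that distinction, with entirely made-up numbers: an 8-fold difference in hires says nothing about per-group success rates until you know the application counts.

```python
# Made-up numbers, only to illustrate the base-rate question:
# "8 times more male hires" is perfectly compatible with identical
# per-group success rates if 8 times more males applied.
applicants = {"male": 800, "female": 100}
hires = {"male": 80, "female": 10}

for group in applicants:
    rate = hires[group] / applicants[group]
    print(f"{group}: {hires[group]}/{applicants[group]} hired = {rate:.0%}")
# male: 80/800 hired = 10%
# female: 10/100 hired = 10%
```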

And copying myself from my other post: I'd disagree that it's "not difficult to understand" that the algorithm doesn't know male success in tech is in large part due to discrimination. Quite the opposite. I'd risk a bet that an average person's understanding of machine learning is actually very, very vague. Depending on what associations they have and what their world view is, their conclusions could differ wildly.

By the way, the meaning of the word "discrimination" has sadly split in two: the more common, pejorative sense of "treating someone unjustly based on traits unrelated to the judgement", and the neutral sense of "treating someone with less preference", neutral as in free of ideology and emotion. The former is slippery in itself, because there are plenty of different definitions of justice; the latter is not inherently wrong in my opinion.

> What they could have theoretically done is list 100 resumes and pick the top 3 guys and the top 3 women regardless of their relative ranking and just read those resumes yourself. It's not automatic but it's a heck of a lot better than reading 100 resumes, especially if you're committed to hiring men and women equally anyway. Then train a different AI with this data.

Oh boy, now THAT would be discrimination based on sex, not qualifications, unless the score was already very strongly biased by sex. Imagine there are 100 males whose objective skill varies linearly from 1 through 100, and 10 females whose objective skill also varies linearly from 1 through 100. The top three males are then 98, 99 and 100; the top three females are 80, 90 and 100. While training an AI on data with an abundance of males could have taught it that males possess some secret superpower it does not comprehend, training it on the other data would most likely teach it that females possess some secret superpower it does not comprehend.
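A quick simulation of that example (the pool sizes come from the example above, not from any real data) shows the gap a fixed top-3 quota creates:

```python
import random

# Uniform "objective skill" for a pool of 100 males and a pool of
# 10 females; the top 3 of each pool are labeled "hired".
random.seed(0)
trials = 10_000

def avg_top3(pool_size: int) -> float:
    total = 0.0
    for _ in range(trials):
        skills = sorted(random.uniform(0, 100) for _ in range(pool_size))
        total += sum(skills[-3:]) / 3
    return total / trials

print(f"average skill of top-3 males:   {avg_top3(100):.1f}")  # ~98
print(f"average skill of top-3 females: {avg_top3(10):.1f}")   # ~82
# Equal top-3 quotas drawn from unequal pools label weaker candidates
# "hired" in the smaller pool, so a model trained on this data learns
# a counter-bias, not neutrality.
```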

Fighting a perceived, untested, unmeasured bias with an arbitrary, ill-considered counter-bias is not a good idea.

> Especially if you're committed to hiring men and women equally anyway.

That's where our opinions strongly differ. No, I'm not, not at all. Why would I be? I strive not to care about gender; that's my take on gender equality. If I receive a hundred applications from males and ten from females, I won't be surprised or disappointed if I end up with ten males and one female.

> If it were true that it only cares about qualifications, or at least mostly cares about qualifications, then female college graduates shouldn't've been downgraded.

No. You assume that a female college is equal to other colleges. That simply does not have to be the case; I see no reason why a female college could not be worse. And if it is worse, then it would be correct to downgrade female college graduates.

> Maybe if they ran the applications through a gender neutralizing machine first (replacing all instances of gender specification with something neutral), they could have successfully trained the AI to find the best applicants.

Maybe yes, maybe no. They would surely filter some information out of the data, but no more, no less. That wouldn't be inherently advantageous or disadvantageous. Had this information actually been correlated with applicants' worth for the job, they would lose out. Had it been irrelevant, they would gain by removing some noise. You can't tell which case they were dealing with, and I can't tell either. If I am wrong and you actually have reasons to lean towards that particular option, please point me to the sources; I'll gladly learn. But then again, I'm afraid I won't take your moral beliefs as a valid argument.
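For what it's worth, such a "gender neutralizing machine" is easy to sketch. The word list and replacements below are my own assumptions, and a real redactor would need far broader coverage (first names, colleges, clubs, pronoun case):

```python
import re

# A minimal sketch of the proposed "gender neutralizing machine":
# replace a fixed list of gendered words with neutral substitutes.
GENDERED = {
    "he": "they", "she": "they", "him": "them", "his": "their",
    "her": "their", "mr": "mx", "mrs": "mx", "ms": "mx",
    "male": "person", "female": "person",
    "men's": "", "women's": "",
}

def neutralize(text: str) -> str:
    pattern = r"\b(" + "|".join(re.escape(w) for w in GENDERED) + r")\b"
    return re.sub(pattern,
                  lambda m: GENDERED[m.group(0).lower()],
                  text, flags=re.IGNORECASE)

print(neutralize("She was captain of the women's chess team."))
# -> "they was captain of the  chess team." Crude, which is the point:
#    naive redaction mangles grammar and still leaks signal elsewhere.
```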

October 23 2018

Kret
23:15
> That's literally a dril Tweet. So for you "right-wing memelords" are a bunch of emotional, easily triggered idiots. So much for the "logic and reason" right.

I have no idea who dril is. And I don't know how you came to the conclusion that for me "right-wing memelords" are a bunch of emotional, easily triggered idiots. I'm just saying that openly telling someone they are "constantly wrong on every position" and expressing scolding disbelief isn't a friendly way to approach another person. Pretty much no one likes having their world view challenged, and to some extent it will be received as a personal attack no matter how open-minded someone aspires to be; don't make it even harder by actually making it a personal attack.

> If, for example, 10% of your data contains the word female and 90% male, it's not hard to see why an algorithm would favor one side. They stopped using it a while ago because it wasn't working, not because it got public.

You are kind of right, but not entirely. I'd argue that it is easy to *believe* that an algorithm fed data with unequal representation of some value is likely to favor the majority, especially when that conclusion fits your narrative, whether or not it is actually true (see the previous point).

If you think, for example, about medicine and comorbidity, controlling for spurious factors is an extremely important aspect of research. Since that and machine learning are soooomewhat related, perhaps machine learning has well-developed methods against biases too? Let's face it: I actually used to learn about machine learning a bit and have a decent chance of knowing more than a random person on the internet, but that still only means I vaguely remember the basics and derive my reasoning from the shreds I haven't yet forgotten from university, with the gaps filled in by guesswork, other related facts and world view. It's really not that hard for someone else to use different shreds, guesswork, related facts and world view, and come to a contrary conclusion.
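There are indeed standard counter-measures. One of the simplest is reweighting the training samples so the under-represented group carries equal total weight; a minimal sketch with synthetic data (all numbers invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Reweight samples so the minority group carries as much total weight
# as the majority. The 900/100 split and the 0.7 threshold are invented.
rng = np.random.default_rng(0)

n_male, n_female = 900, 100
skill = rng.uniform(0, 1, n_male + n_female)
is_female = np.array([0.0] * n_male + [1.0] * n_female)
hired = (skill > 0.7).astype(int)  # an unbiased label, for simplicity

X = np.column_stack([skill, is_female])
weights = np.where(is_female == 1, n_male / n_female, 1.0)

model = LogisticRegression().fit(X, hired, sample_weight=weights)
print(model.coef_)  # with an unbiased label, the is_female weight stays ~0
# Reweighting controls for representation; it cannot repair labels that
# were biased in the first place.
```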

Ideological and emotional charge doesn't help either. Notice sentences like "Top U.S. tech companies have yet to close the gender gap in hiring" or "the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way". For you the first one is probably positive and hopeful (a gender gap is wrong, closing it is good!), the second probably negative (gender neutrality is good, not doing something in a gender-neutral way is bad!). But why? This assumes a priori that a gender gap is wrong and gender neutrality is good. Are you sure of that? What would convince you otherwise? If there's no such thing, then it is merely faith, not an informed opinion.

I, for example, think that a gender gap in different professions is not inherently wrong, even if it is caused by our culture and history. Just let things roll inertly if they aren't directly hurting anyone, and nudge them subtly for the better here and there. Everyone is used to girls playing with dolls? Fuck it, I don't care, let them give dolls to girls, let the girls play with dolls. A girl wants to play with a toy car? Cool, give her a toy car. A boy wants to wear a skirt? Not necessarily a good idea in many parts of the world. Sure, he might have the right to do so, and a lot of people might be OK with that, but you have to weigh the pros of being and behaving like who you want to be against the cons of breaking the norms, no matter how stupid or unjust those norms might be in your opinion, or even how objectively bad they actually might be.

Humans just tend to cluster, and in many cases they cluster around pretty objectively weird shit. Well, fuck. Tough luck. The integrity of human clusters is a value in itself too. Personally I often hate it, but no amount of my hate will change that, so there's no point in struggling.

Just like many people cluster around fighting for gender equality as a value in itself, while other people cluster around fighting the first ones, which is pretty fucking ironic.
Kret
08:05
Well, this works both ways. If you assume upfront that others are "left-wing feminazis", you're not going to get anywhere either.

October 22 2018

Kret
21:31
Actually, such an interpretation is not necessarily correct either. A lot depends on what the model considered a "success" and what made a candidate "good" in the training set, on the quality, quantity and preparation of the data, etc.

My guess (emphasis on "guess") would be that a "good" candidate was one that was eventually hired, and only that. Then it would be fairly easy for a possible bias against women to carry over into the model. A recruiter sees a woman, especially one "brandishing" her "womanness" in her CV ("women's school", "women's team", etc.), the recruiter thinks "ewww, too much woman" and doesn't hire her; the model then learns something similar.
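A toy version of that mechanism (the flat 0.2 "recruiter penalty" and every other number are invented) shows how a plain hired/not-hired label smuggles the bias into the model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated recruiter: equal skill distributions for both groups, but a
# flat penalty applied to women before the hire/no-hire decision.
rng = np.random.default_rng(1)

n = 5_000
skill = rng.uniform(0, 1, n)
is_female = rng.integers(0, 2, n).astype(float)
hired = (skill - 0.2 * is_female > 0.6).astype(int)  # biased decision

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)
print(model.coef_)
# The is_female coefficient comes out clearly negative: the model has
# absorbed the recruiter's penalty as if it were a fact about candidates.
```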

But then again, the "right-wing memelords" could be right all along as well. The measure of success in the data could be virtually unbiased (idk, the % of commits to a repo that broke the build, code coverage %, whatever) and women may simply have been objectively worse at it. After all, there *are* differences between the sexes; for example, men seem to be significantly more likely to have autism spectrum disorders. Maybe that's just how it is, men are better suited for some technical jobs than women, and there is nothing wrong about it?

Either way, trying to force a change too hard, righteous or not, tends to rebound. Force a 50-50 parity in a "systematically sexist" field and watch women's lives get absolutely miserable, regardless of whether there really was sexism in the first place and regardless of whether a particular woman is competent, because now she will mostly be seen as a painful necessity, not an actual asset.

If you really want to get anywhere, don't call out "systematic sexism" and "right-wing memelords" wherever it satisfies your need to stand for something, or you will watch the "right-wing memelords" become even "rightier" and "memier", and no one will benefit from that, least of all the women you allegedly want to fight for in the first place.