And Ferdo keeps focusing on completely irrelevant details, how (un)surprising.
Look, ferdo, just as I didn't write the same thing 34 times for jagibaba, I'm not going to do it for you either if you refuse to get it. Everything about this study has been written down and explained; thanks to you, the other side of the coin has been laid out as well, so everyone can read it for themselves and form their own opinion. The two of us are not going to agree. What I can do is quote the posts that have already been written, if you insist on rambling on in your own way, and judging by your last post, I'm left with no other option - so let's repeat:
You really don’t need an AI to figure this out unless you’re blinded by political correctness, it’s obvious. Oh wait — they are blinded by political correctness.
A new study out of Cornell reveals that the machine learning practices behind AI, which are designed to flag offensive online content, may actually “discriminate against the groups who are often the targets of the abuse we are trying to detect,” according to the study abstract.
The study involved researchers training a system to flag tweets containing “hate speech,” in much the same way that other universities are developing systems for eventual online use, by using several databases of tweets, some of which had been flagged by human evaluators for offensive content.
“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users,” the abstract continues.
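To make the methodology in that abstract concrete, here is a minimal sketch of the kind of measurement it describes: train a classifier on human-labeled tweets, then compare how often it flags messages from different groups. The toy data, the group names and the model choice are my own illustrative assumptions, not the study's actual datasets or classifiers.

```python
# Sketch of the bias measurement described in the study: fit an
# abusive-language classifier on human-labeled text, then compare
# per-group flag rates. The tiny inline dataset is a placeholder;
# with so few examples the predictions are arbitrary, the point is
# only to show the shape of the computation.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# (text, human_label: 1 = abusive, 0 = benign, dialect_group)
train = [
    ("you people are trash",      1, "group_a"),
    ("what a beautiful morning",  0, "group_a"),
    ("everyone here is an idiot", 1, "group_b"),
    ("congrats on the new job",   0, "group_b"),
    ("i hate all of you",         1, "group_a"),
    ("thanks for the kind words", 0, "group_b"),
]
test = [
    ("you are trash",           "group_a"),
    ("beautiful weather today", "group_a"),
    ("you are all idiots",      "group_b"),
    ("kind words, thanks",      "group_b"),
]

texts, labels, _ = zip(*train)
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Flag rate per group: share of messages the model predicts abusive.
flags, totals = defaultdict(int), defaultdict(int)
for text, group in test:
    totals[group] += 1
    flags[group] += int(clf.predict(vec.transform([text]))[0] == 1)

rates = {g: flags[g] / totals[g] for g in totals}
print("flag rate per group:", rates)
# The study's headline figure is this kind of ratio (e.g. 1.5x):
if min(rates.values()) > 0:
    print("rate ratio:", max(rates.values()) / min(rates.values()))
```

If one group's flag rate comes out consistently higher than another's at equal underlying behavior, that is the "systematic racial bias" the abstract is talking about.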
Many mainstream black liberals and black media outlets commonly talk about white people the same way that white supremacists talk about black people. This is not a big secret. In fact, it’s impossible to miss unless you’re mentally blocking it out. But, what happens when the AI spots it? Well then, the AI must be badly designed because it would adversely impact the people behaving the worst, but they’re “the wrong people.” Why are they the wrong people? Because it’s not “woke” to hold everyone to the same standards anymore.
And that is exactly what is happening - either it's relativizing and spinning things so that the problem supposedly lies elsewhere, acting as if people simply cannot control the way they communicate, or it's outright condemnations in the style of:
The algorithms that detect hate speech online are biased against black people
A new study shows that leading AI models are 1.5 times more likely to flag tweets written by African Americans as “offensive” compared to other tweets.
...and other pieces in a similar vein. In short: denial, denial, and denial once more. God forbid anyone call a spade a spade and face the plain truth.
Oh no, not that - you'd come across as intolerant, and that just isn't done these days.
Now quickly, quickly: shoot the messenger and ignore the message