Tuesday, October 02, 2018

Hate speech and machine intelligence

FOR ALL THE advances being made in the field, artificial intelligence still struggles when it comes to identifying hate speech. When he testified before Congress in April, Facebook CEO Mark Zuckerberg said it was “one of the hardest” problems. But, he went on, he was optimistic that “over a five- to 10-year period, we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging things for our systems.” For that to happen, however, humans will first need to define for ourselves what hate speech means—and that can be hard because it’s constantly evolving and often dependent on context.

“Hate speech can be tricky to detect since it is context and domain dependent. Trolls try to evade or even poison such [machine learning] classifiers,” says Aylin Caliskan, a computer science researcher at George Washington University who studies how to fool artificial intelligence.

In fact, today’s state-of-the-art hate-speech-detecting AIs are susceptible to trivial workarounds, according to a new study to be presented at the ACM Workshop on Artificial Intelligence and Security in October. A team of machine learning researchers from Aalto University in Finland, with help from the University of Padua in Italy, successfully evaded seven different hate-speech-classifying algorithms using simple attacks, like inserting typos. The researchers found that all of the algorithms were vulnerable, and argue that humanity’s trouble defining hate speech contributes to the problem. Their work is part of an ongoing project called Deception Detection via Text Analysis.
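To see why typo-style attacks work, consider a toy filter that matches tokens against a blocklist. The sketch below is purely illustrative—it is not one of the seven classifiers from the study, and the word list and messages are made up—but it shows how a single inserted character or a removed word boundary can slip past token-level matching.

```python
# Hypothetical blocklist for illustration only.
FLAGGED_WORDS = {"idiot", "scum"}

def is_flagged(text: str) -> bool:
    # Naive token-level check: lowercase each whitespace-separated token,
    # strip trailing punctuation, and look it up in the blocklist.
    return any(tok.lower().strip(".,!?") in FLAGGED_WORDS
               for tok in text.split())

original = "you are an idiot"
typo     = "you are an idiiot"   # one inserted character
no_space = "you are anidiot"     # removed word boundary

print(is_flagged(original))  # True
print(is_flagged(typo))      # False: the altered token no longer matches
print(is_flagged(no_space))  # False: the merged token no longer matches
```

Real classifiers use learned features rather than a fixed list, but the study found they inherit a version of the same brittleness: small character-level perturbations move the input away from anything seen in training.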

If you want to create an algorithm that classifies hate speech, you need to teach it what hate speech is, using data sets of examples that are labeled hateful or not. That requires a human to decide when something is hate speech. Their labeling is going to be subjective on some level, although researchers can try to mitigate the effect of any single opinion by using groups of people and majority votes. Still, the data sets for hate-speech algorithms are always going to be made up of a series of human judgment calls.
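The aggregation step described above—several annotators label each example and the majority wins—can be sketched in a few lines. The annotator votes here are invented for illustration; real datasets typically also track inter-annotator agreement.

```python
from collections import Counter

def majority_label(votes):
    # Return the most common label among the annotators' votes.
    return Counter(votes).most_common(1)[0][0]

# Hypothetical annotations: three annotators per example.
annotations = {
    "example 1": ["hateful", "hateful", "not hateful"],
    "example 2": ["not hateful", "not hateful", "hateful"],
}
labels = {text: majority_label(v) for text, v in annotations.items()}
print(labels)  # each example gets its majority label
```

Note that majority voting reduces, but does not remove, the subjectivity the article describes: the output is still a composite of individual judgment calls.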

SOURCE 


3 comments:

Anonymous said...

Liberals use hate speech freely to attack those they oppose.

Bill R. said...

Hate speech IS Free Speech. The trouble with Suckerberg is he only shuts down one side.

Anonymous said...

If you go to a particular site, and don't like what's being posted, DON'T GO THERE ANYMORE!!!!! If you see something on social media, such as Facebook, that is offensive to you, simply block that poster from your page. Old time Brooklyn rhyme--Sticks and stones may break my bones, but words will never hurt me.