Friday, November 30, 2018



Google blocks gender pronouns including 'him' and 'her' from its AI tool that completes sentences over fears it might predict YOUR sex or gender identity incorrectly and offend you

Google is removing gender pronouns from the predictive text feature found in its Gmail platform.

The feature will no longer suggest pronouns that indicate a specific gender such as 'he', 'her', 'him' or 'she' for fear of suggesting the wrong one and causing offence.

Google staff have revealed the technology will not suggest gender-based pronouns because the risk is too high that its 'Smart Compose' technology might predict someone's sex or gender identity incorrectly. 

Gmail product manager Paul Lambert said a company research scientist discovered the problem in January.

He wrote: 'I am meeting an investor next week and Smart Compose suggested a possible follow-up question: "Do you want to meet him?" instead of "her".'

Consumers have become accustomed to embarrassing gaffes from auto-correct on smartphones but Google is being cautious around such a sensitive topic.

Gender issues are reshaping politics and society, and critics are scrutinising potential biases in artificial intelligence like never before.

Mr Lambert said the Smart Compose team of about 15 engineers and designers tried several workarounds, but none proved bias-free or worthwhile.

They decided the best solution was to limit coverage and implement a gendered pronoun ban.

It affects fewer than one per cent of cases where Smart Compose would propose something.

'The only reliable technique we have is to be conservative,' said Prabhakar Raghavan, who oversaw engineering of Gmail and other services until a recent promotion.
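That 'conservative' approach amounts to a blunt post-filter on the model's output rather than a change to the model itself. As a rough sketch of the idea (the function names and suggestion source below are hypothetical; Google has not published its implementation), a completion is simply dropped if it contains a gendered pronoun:

    # Hypothetical sketch of a conservative pronoun filter on model output.
    # Google has not published its implementation; all names here are invented.
    GENDERED_PRONOUNS = {"he", "she", "him", "her", "his", "hers"}

    def is_safe_suggestion(suggestion: str) -> bool:
        """Reject any completion that contains a gendered pronoun."""
        tokens = {t.strip(".,!?'\"").lower() for t in suggestion.split()}
        return tokens.isdisjoint(GENDERED_PRONOUNS)

    def filter_suggestions(suggestions: list[str]) -> list[str]:
        """Keep only suggestions that pass the pronoun check."""
        return [s for s in suggestions if is_safe_suggestion(s)]

    # The gendered completion is suppressed; the neutral one survives.
    print(filter_suggestions(["Do you want to meet him?", "Do you want to meet?"]))
    # -> ['Do you want to meet?']

Blocking at the output stage trades a small loss of coverage (the 'fewer than one per cent of cases' cited above) for a guarantee that the wrong pronoun is never shown.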

The company apologised in 2015 when the image recognition feature of its photo service labelled a black couple as gorillas.

In 2016, Google altered its search engine's autocomplete function after it suggested the anti-Semitic query 'are jews evil' when users sought information about Jews.

Google has banned expletives and racial slurs from its predictive technologies, as well as mentions of its business rivals or tragic events.

The company's new policy banning gendered pronouns also affects the list of possible responses in Google's Smart Reply.

That service allows users to respond instantly to text messages and emails with short phrases such as 'sounds good.'

Google uses tests developed by its AI ethics team to uncover new biases. A spam and abuse team pokes at systems, trying to find 'juicy' gaffes by thinking as hackers or journalists might, Mr Lambert said.
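A probe of that sort can be as simple as replaying prompts through the system and flagging any reply that contains a gendered pronoun. The sketch below is illustrative only; generate_reply is a hypothetical stand-in for the production model, and Google has not described its actual test harness:

    # Hypothetical bias probe: replay prompts, flag gendered-pronoun replies.
    GENDERED_PRONOUNS = {"he", "she", "him", "her", "his", "hers"}

    def generate_reply(prompt: str) -> str:
        # Stand-in for the real model under test.
        return "Do you want to meet them?"

    def probe(prompts: list[str]) -> list[tuple[str, str]]:
        """Return (prompt, reply) pairs where a gendered pronoun slipped through."""
        gaffes = []
        for prompt in prompts:
            reply = generate_reply(prompt)
            words = {w.strip(".,!?'\"").lower() for w in reply.split()}
            if words & GENDERED_PRONOUNS:
                gaffes.append((prompt, reply))
        return gaffes

    # An empty list means the pronoun ban held for these prompts.
    print(probe(["I am meeting an investor next week."]))  # -> []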

SOURCE 

4 comments:

ScienceABC123 said...

Where is the world of my youth?

"Sticks and stones may break my bones, but names (words) will never hurt me."

Stan B said...

Well, it turns out that words CAN hurt you - if you have never experienced anything but the bubble-wrap world of your helicopter parents' overprotective nurturing.

Anonymous said...

I avoid Google like the plague!

Anonymous said...

When will hospitals be banned from assigning gender on birth certificates? We should wait until the child is 5 or 6 so they can pick their own "preferred" gender. Must also include "non-binary".