
Google rolls out artificial intelligence tool 'Perspective' to combat online trolls

Seventy-two per cent of American internet users have witnessed harassment online and nearly half have personally experienced it, said Jared Cohen, president of Google's Jigsaw technology incubator.

PTI | Updated on: 25 Feb 2017, 08:55:27 AM

Paris:

Google said it will begin offering media groups an artificial intelligence tool designed to stamp out incendiary comments on their websites.

The programming tool, called Perspective, aims to assist editors trying to moderate discussions by filtering out abusive "troll" comments, which Google says can stymie smart online discussions.

"Seventy-two per cent of American internet users have witnessed harassment online and nearly half have personally experienced it," said Jared Cohen, president of Google's Jigsaw technology incubator.

"Almost a third self-censor what they post online for fear of retribution," he added in a blog post yesterday titled "When computers learn to swear".

Perspective is an application programming interface (API), or set of methods for facilitating communication between systems and devices, that uses machine learning to rate how comments might be regarded by other users.
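To illustrate how a publisher might consume such an API, here is a minimal sketch in Python. The endpoint, request schema and attribute names follow Jigsaw's publicly documented Perspective (Comment Analyzer) API, but the exact URL, key handling and field names here should be read as assumptions rather than details confirmed by this article.

```python
# Minimal sketch of scoring a comment's toxicity via an HTTP API.
# Endpoint, request schema and attribute names are assumptions based on
# Jigsaw's published Perspective (Comment Analyzer) API and may differ.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder; a real key is issued by Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def score_toxicity(comment_text: str) -> float:
    """Send a comment to the API and return a toxicity score between 0.0 and 1.0."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],                      # Perspective initially targets English
        "requestedAttributes": {"TOXICITY": {}},  # ask only for the toxicity model
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(score_toxicity("Thanks for the thoughtful article."))  # expected: low score
    print(score_toxicity("You are an idiot and a troll."))       # expected: high score
```

A moderation tool would typically compare the returned score against a threshold chosen by the editors, holding or hiding comments that exceed it rather than rejecting them outright.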

The system, which will be provided free to media groups including social media sites, is being tested by The Economist, The Guardian, The New York Times and Wikipedia.

Many news organizations have closed down their comments sections for lack of sufficient human resources to monitor the postings for abusive content.

"We hope we can help improve conversations online," Cohen said.

Google has been testing the tool since September with The New York Times, which wanted to find a way to maintain a "civil and thoughtful" atmosphere in reader comment sections.

Perspective's initial task is to spot toxic language in English, but Cohen said the goal was to build tools for other languages as well, and tools that could identify when comments are "unsubstantial or off-topic".

Twitter said earlier this month that it too would start rooting out hateful messages, which are often anonymous, by identifying their authors and prohibiting them from opening new accounts, or by hiding such messages from internet searches.

Last year, Google, Twitter, Facebook and Microsoft signed a "code of good conduct" with the European Commission, pledging to review most abusive content flagged by users within 24 hours.


First Published : 25 Feb 2017, 08:34:00 AM
