Nearly two months after the infamous Christchurch mosque shootings claimed 51 lives in New Zealand, social media giant Facebook on Wednesday announced curbs on its live streaming feature. The move comes in a bid to limit the use of its services for causing harm or spreading hatred. The company, which has 2.38 billion monthly active users globally, said users will be barred from using the facility for a certain period of time if they violate the new Facebook Live rules.
"Today we are tightening the rules that apply specifically to live. We will now apply a 'one strike' policy to 'live' in connection with a broader range of offenses. From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time for example 30 days starting on their first offense,” Facebook Vice President (Integrity) Guy Rosen wrote in a blogpost.
For example, an individual who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for the set period of time.
"We plan on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook... Our goal is to minimise risk of abuse on live while enabling people to use live in a positive way every day," Rosen added in the blogpost.
Apart from this, Facebook is also investing USD 7.5 million in new research partnerships with academics from three universities to collaborate on improving image and video analysis technology.
In March, over 50 people were gunned down at two Christchurch mosques by a self-described white supremacist, who broadcast live footage on Facebook. The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended.
Facebook had said in a previous post that the video was viewed fewer than 200 times during the live broadcast. Including those views, the video was viewed about 4,000 times in total before being removed from Facebook.
Rosen, in the latest blogpost, explained that one of the challenges Facebook faced in the days after the Christchurch attack was the proliferation of many different variants of the video of the attack.
People had shared edited versions of the video that were hard for Facebook's systems to detect, even though the company deployed a number of techniques, including video and audio matching technology.
"That's why we're partnering with The University of Maryland, Cornell University and The University of California, Berkeley to research new techniques to detect manipulated media across images, video and audio; and distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs," he said.
Previously, if someone posted content that violated Facebook's Community Standards (on Live or otherwise), the company took down the post. If the user kept posting violating content, they were blocked from using Facebook for a certain period of time.
Rosen said continued efforts would be critical to tackle "manipulated media, including deep fakes (videos intentionally manipulated to depict events that never occurred)".
Facebook has tightened its live streaming norms days ahead of an online extremism summit in Paris, called after the Christchurch mosque attacks. The summit will be attended by several political leaders from Europe, Canada and the Middle East, who will meet senior representatives from companies such as Facebook, Google and Twitter to discuss ways of eliminating terrorist material from these platforms.
New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron will co-chair the summit, which aims to co-ordinate international efforts to stop social media from being used to organise and promote terrorism.