Social networks and other online content providers will have to remove terrorist and child sexual abuse content from their platforms within one hour, or face a fine of up to 4% of their global revenue, under a French law passed on Wednesday.
Companies such as Facebook, Twitter, YouTube, Instagram and Snapchat will have 24 hours to remove other “manifestly illicit” content, according to the law, which sets up a specialised digital prosecutor at the courts and a government unit to observe hate speech online.
Justice Minister Nicole Belloubet told parliament the law will help reduce online hate speech.
"People will think twice before crossing the red line if they know that there is a high likelihood that they will be held to account," she said.
However, free-speech advocates have criticised the new law, saying it will curtail the democratic right to freedom of expression.
The Computer & Communications Industry Association, an advocacy group with offices in Washington and Brussels, said it was concerned the French legislation "could lead to excessive takedowns of content as companies, especially startups, would err on the side of caution.”
"National laws combating harmful content online can negatively affect what information is accessible in other countries," Wikimedia Policy said on Twitter.
La Quadrature du Net (LQDN), an online civil liberties defence group, said in a statement that the legislator should have instead targeted the Internet giants' business models. It said it was unrealistic to think content could be withdrawn within the hour and the law was unnecessary.
"If the site does not censor the content (for instance because the complaint was sent during the weekend or at night), then police can force Internet service providers to block the site everywhere in France," it said.
Twitter France public affairs chief Audrey Herblin-Stoop said the company would continue to work closely with the government to build a safer Internet and fight against illegal hate speech, while protecting an open internet, freedom of expression and fair competition.
Herblin-Stoop said keeping public debate civil was a top priority, adding that Twitter's investments in technologies that flag hate speech would reduce the burden on users of having to report illicit content themselves.
Half of the tweets on which the company takes action are now first flagged by software rather than by users, compared with one in five in 2018, she told Reuters.
Facebook did not return calls and emails seeking comment, while Google and Snapchat were not immediately available for comment.
(FRANCE 24 with REUTERS, AP)