New federal legislation aims to hold social media platforms liable for misinformation
Mark Zuckerberg (AFP)

Four federal Democratic lawmakers will introduce legislation in the House on Friday that would hold websites and social media platforms liable for spreading misinformation and harmful content.

The focus of the bill is the use of algorithms that drive third-party content to people's feeds based on their personal information and browsing history.

The Justice Against Malicious Algorithms Act would end civil immunity for Facebook and other platforms that knowingly or recklessly use algorithms or other technology to recommend content that "materially contributes to physical or severe emotional injury."

If passed, the bill would allow people to sue in cases where someone acts on misinformation or damaging content that personalized algorithms placed in their feed, for example by taking their own life.

U.S. Rep. Frank Pallone (D-06), along with Reps. Mike Doyle (D-PA), Jan Schakowsky (D-IL), and Anna Eshoo (D-CA), is behind the bill, which would amend Section 230 of the 1996 Communications Decency Act, the provision that shields social platforms from responsibility for problematic content posted by their users.

The legislation comes a week after Facebook whistleblower Frances Haugen testified before a Senate committee that Facebook's algorithms promote angry content to keep users engaged — and target children to ensure a lifetime of internet addiction.

The bill does not apply to search features or algorithms that don't use personalization; to web-hosting, data-storage, or data-transfer internet infrastructure; or to online platforms with fewer than 5 million unique monthly visitors.

"Social media platforms like Facebook continue to actively amplify content that endangers our families, promotes conspiracy theories, and incites extremism to generate more clicks and ad dollars. These platforms are not passive bystanders — they are knowingly choosing profits over people, and our country is paying the price," Pallone said in a statement. "The time for self-regulation is over, and this bill holds them accountable. Designing personalized algorithms that promote extremism, disinformation, and harmful content is a conscious choice, and platforms should have to answer for it."

Some critics have raised First Amendment concerns about the plan. A New Jersey-based cybersecurity expert isn't a fan, calling it censorship.

"I don't believe censoring individuals through legislation on the organizations that provide a vehicle for a person's voice will be effective," said Milan Baria, CEO of Blueclone Networks. "Rather, lawmakers should focus on mandating disclosure of any algorithms that promote content so that the end user is fully aware this is happening. For example, if a platform decides to personalize content, there should be a disclaimer near that content which specifies why that content was shown."


New Jersey Monitor is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501(c)(3) public charity. New Jersey Monitor maintains editorial independence. Contact Editor Terrence McDonald with questions: info@newjerseymonitor.com.