Trump isn't the gravest threat to our democracy — it's something even less human
Some data points for your consideration:
- Last Saturday in Chicago’s affluent Old Irving Park neighborhood, Donald Trump’s secret, masked police violently pulled a 67‑year‑old U.S. citizen — a member of a local running club returning to his home from a run — out of his car and threw him to the street, where they assaulted him with such force that they broke six ribs and left him with internal bleeding.
- Trump is openly taking bribes, publicly ordering political prosecutions, and murdering people in naked violation of both U.S. and international law, all while claiming the Supreme Court gave him absolute immunity from prosecution for any crime.
- An MIT study finds that lies presented as news travel six times faster across social media than truths.
- While more than 75 percent of Americans trusted the news 50 years ago, today that number is a mere 28 percent, with only 8 percent of Republicans believing what they see or read in mainstream outlets.
These are all the same story, and they all largely derive from a single source, a mind poison that was introduced into the American (and world) mindstream in a big way about two decades ago.
It’s called the algorithm, and if we’re to survive as a republic it must be regulated the same way we regulate anything else that produces addictive, compulsive behavior that twists and distorts people’s lives.
Possibly the greatest threat to humanity at this moment is the algorithm.
It can twist and wreck people’s minds and lives — tear apart families and destroy countries — in a way that can be more rapid and more powerful than heroin, cocaine, or fentanyl. And yet it is completely unregulated.
An algorithm, in this context, is a software system that inserts itself between humans as we attempt to communicate with each other. It decides which communications are important and which are not, which communications will be shared and which will not, what we will see or learn and what we will not.
As a result, in a nation where 48 percent of citizens get much or most of their news from social media, the algorithms driving social media sites ultimately decide which direction society will move as a result of the shared information they encourage or suppress across society.
When you log onto social media and read your “feed,” you’re not seeing (in most cases) what was most recently posted by the people you “follow.” While some of that’s there, the algorithm also feeds you other posts it thinks you’ll like based on your past behavior, so as to increase your “engagement,” aka the amount of time you spend on the site and thus the number of advertisements you will view.
As a result, your attention is continually tweaked, led, and fine-tuned to reflect the goal of the algorithm’s programmers. Click on a post about voting, for example, and the algorithm then leads you to election denial, from there to climate denial, from there to QAnon.
Next stop, radicalization or paralysis. But at least you stayed along for the ride and viewed a lot of ads in the process.
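The feed logic described above can be sketched as a toy model. Everything here is hypothetical: the field names, the scoring weights, and the "outrage score" are illustrative assumptions for the sake of the sketch, not any platform's actual code.

```python
# Toy sketch of an engagement-maximizing feed ranker.
# All post data and scoring weights are hypothetical illustrations,
# not any real platform's algorithm.

def predicted_engagement(post, user_history):
    """Score a post by how likely this user is to linger on it,
    based on topics they have clicked before."""
    topic_affinity = sum(
        1 for topic in post["topics"] if topic in user_history["clicked_topics"]
    )
    # Outrage-inducing content tends to hold attention longer,
    # so this toy ranker weights it heavily.
    return topic_affinity + 2.0 * post["outrage_score"]

def build_feed(posts, user_history):
    """Order the feed by predicted engagement, not by recency or accuracy."""
    return sorted(
        posts,
        key=lambda p: predicted_engagement(p, user_history),
        reverse=True,
    )

posts = [
    {"id": 1, "topics": ["voting"], "outrage_score": 0.1},
    {"id": 2, "topics": ["voting", "election-denial"], "outrage_score": 0.9},
    {"id": 3, "topics": ["gardening"], "outrage_score": 0.0},
]
history = {"clicked_topics": {"voting"}}

feed = build_feed(posts, history)
print([p["id"] for p in feed])  # → [2, 1, 3]
```

Note what the ranker never asks: whether a post is true, or whether seeing it is good for the reader. A user who clicked on "voting" gets the election-denial post first, because the model predicts it will hold their attention longest.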
Algorithms used in social media are not tuned for what is best for society. They don’t follow the rules that hundreds of thousands of years of human evolution have built into our cultures, religions, and political systems.
They don’t ask themselves, “Is this true?” or “Will this information help or hurt this individual or humanity?”
Instead, the algorithms’ main purpose is to make more money for the billionaires who own the social media platforms.
If telling you that, as Trump recently said, climate change “may affect us in 300 years” makes for more engagement (and more profit for the social media site) than does telling the truth about fossil fuels, it will get pushed into more and more minds.
No matter that such lies literally threaten human society short-term and possibly the survival of the human race long-term.
As Jaron Lanier told the Guardian:
“People survive by passing information between themselves. We’re putting that fundamental quality of humanness through a process with an inherent incentive for corruption and degradation. The fundamental drama of this period is whether we can figure out how to survive properly with those elements or not.”
Those of a certain age or students of the advertising business may remember when Vance Packard’s 1957 book The Hidden Persuaders set off a panic across America, claiming that movies and TV shows were inserting micro-bursts of advertisements that flew below the radar of consciousness but nevertheless changed behavior.
The classic example was the phrase “Eat Popcorn” supposedly flashed on movie screens too briefly for viewers to consciously notice. It provoked a panic in Congress and multiple attempts at legislation to outlaw the practice before it was debunked as ineffective.
But algorithms are far from ineffective. They’re arguably one of the most powerful forces on the planet today.
The premise of several books, most famously Shoshana Zuboff’s The Age of Surveillance Capitalism, is that the massive amounts of data collected about each of us, massaged and used by “automated” algorithms to increase our engagement, amount to a high-tech form of old-fashioned but extremely effective thought control.
She argues that these companies are “intervening in our experience to shape our behavior in ways that favor surveillance capitalists’ commercial outcomes. New automated protocols are designed to influence and modify human behavior at scale as the means of production is subordinated to a new and more complex means of behavior modification.” (Emphasis hers.)
She notes that “only a few decades ago US society denounced mass behavior-modification techniques as unacceptable threats to individual autonomy and the democratic order.” Today, however, “the same practices meet little resistance or even discussion as they are routinely and pervasively deployed” to meet the financial goals of those engaging in surveillance capitalism.
This is such a powerful system for modifying our perspectives and behaviors, she argues, that it intervenes in or interferes with our “elemental right to the future tense, which accounts for the individual’s ability to imagine, intend, promise, and construct a future.” (Emphasis hers.)
Social media companies have claimed that their algorithms are inventions and trade secrets, intellectual property that falls under the rubric of laws designed to advance and protect innovation and commerce.
In my book The Hidden History of Big Brother: How the Death of Privacy and the Rise of Surveillance Threaten Us and Our Democracy, I argue that algorithms should be open-source and thus publicly available for examination.
The reason so many algorithms are so toxic is that they are fine-tuned to maximize engagement for the benefit of advertisers, who then pay the social media company, with little or no consideration for their impact on individuals or society.
Even more insidious, a billionaire social media company owner with a political agenda can program his algorithm to promote a particular politician, point of view, or a story that might help or destroy a politician or political party. Or even destroy a nation’s citizens’ faith in their government, media, or in democracy itself.
One way to get this under control is to require social media companies to ditch the algorithm and its associated advertising revenue model, and work instead on a subscription model with a modest fee.
Nigel Peacock and I saw this at work for the nearly two decades that we ran over 20 forums on CompuServe back in the 1980s and ’90s. Everybody there paid a membership fee to CompuServe and there was no advertising, so we had no incentive to try to manipulate their experience beyond normal moderation. There was no algorithm driving the show.
Replacing secret algorithms with subscriptions — or requiring they be publicly available in plain English so everybody can see how they’re being manipulated — would reduce the amount of screen time and the level of “screen addiction” so many people experience.
There’s a broad consensus among social scientists, psychologists, and political scientists that reducing algorithm-driven screen addiction would be good both for individual mental health and for the cohesion and health of our society.
But lacking a change in business model, the unique power social media holds to change behavior for good or ill — from Twitter spreading the Arab Spring, to Facebook provoking a mass slaughter in Myanmar, to both helping Russia elect Donald Trump in 2016 and 2024 — cries out for regulation, transparency, or, preferably, both.
Three years ago, Sen. Ron Wyden (D-OR), along with Sen. Cory Booker (D-NJ) and Rep. Yvette Clarke (D-NY), introduced the Algorithmic Accountability Act of 2022, which would do just that.
“Too often, Big Tech’s algorithms put profits before people, from negatively impacting young people’s mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between,” said Sen. Tammy Baldwin (D-WI), a co-sponsor of the legislation.
“It is long past time,” she added, “for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers.”
And — let’s not forget — to protect our democracy, our nation, and our planet.
The morbidly rich people who own our social media, focused more on adding billions to their money bins than on the consequences of their algorithms, don’t seem particularly concerned about these issues. Instead, they appear to be intentionally tweaking their algorithms to promote content that agrees with their political views and economic interests (although we can’t be sure, because they keep them secret).
But it’s a safe bet that without the “enraging effect” of algorithmic amplification of outrage and hate, Donald Trump would never have become president, most Americans wouldn’t support brutal ICE tactics out of fear of brown people, and we wouldn’t today live in a nation where one in five households has family members who have stopped speaking to each other because of politics.
Right now, the Trump administration and Republican politicians don’t want to touch this subject because they believe Zuckerberg, Musk, and others who control the algorithms are using them to the GOP’s advantage.
But that sword can cut both ways: public outrage may reach the point where it’s more profitable for the tech billionaires to promote anger against those in power than against those currently on the outside.
It’s way past time to end the algorithmic manipulation of the American mind.
Pass it along (because the algorithm probably won’t).