Social workers say the far right's adoption of 'pedophile' as an insult is hurting real victims

Labor leader Randi Weingarten is perhaps best known as president of the American Federation of Teachers (AFT), the second-largest teachers' union in the United States. Speaking to Salon, Weingarten made it clear that she cares deeply about the welfare of both teachers and their students — but unwanted partisan politics keeps getting in the way.

This article first appeared in Salon.

"I'm a school teacher and a lawyer and a union leader, and I know when you deal with something that is so illegal, we need to protect the victim," Weingarten told Salon. "We need to believe the victim. So when all of a sudden this word — this thing that is so evil and so inappropriate and so horrible — gets used this much, it has no meaning anymore."

The word in question is "pedophile," or sometimes "groomer." Amid a bitter culture war, many on the right, from playwrights like David Mamet to politicians like Rep. Marjorie Taylor Greene, have seized on the word as an evidence-free way to discredit their culture war enemies. It is now being employed as one might use a slur like "idiot," albeit with a far darker connotation. And it has particular currency among believers in the QAnon conspiracy theory, who aver without evidence that a Satanic cabal of pedophiles is running the world.

Yet for actual victims of child sex abuse, the terms "pedophile" and "groomer" recall actual lived experiences — and for the people who spend their careers helping children, these words are being used in ways that can only be described as unjustified and malicious.


Take Cory Bernaert, a Florida kindergarten teacher who has expressed concern that the state's new "don't say gay" bills will harm both his students and himself, as Bernaert is part of the LGBTQ community. After discussing his concerns on HBO's "Last Week Tonight with John Oliver" and a live interview on MSNBC, Bernaert was targeted by waves of abuse and harassment.

"The distrust that these LGBTQ students are going to have in their school and in their teachers is going to be magnified and there won't be a safe place for them anywhere if they can't feel safe at school," Bernaert told Salon. "Where can they feel safe if they're already feeling unsafe at home?"

This, predictably, included the insults "pedophile" and "groomer." Not accusations, mind you — no one was seriously accusing Bernaert of anything — the words were simply used as slurs.

"The major concern that I have for teachers is really their mental health and their mental stability, once these words 'pedophile' and 'groomer' are being used to describe them," Bernaert explained. "The reason being is any educator has devoted their life and really everything they have in their being to fostering a love for learning in children. It's very common for everyday people to have a complete misunderstanding of the actual work ethic and the amount of time that goes into being an educator."

Add to this the bigotry faced by an LGBTQ teacher who is slurred for nothing other than his identity and political views, and, Bernaert says, morale drops — and that is just for the teachers. Smearing good teachers with these labels also hurts children who actually are victimized by sexual abuse (whether at school or elsewhere) and LGBTQ children who may have problems at home and need a safe space elsewhere.



The problem with these false charges is not limited to our schools. Jennifer Thompson, the Executive Director of the New Jersey Chapter of the National Association of Social Workers, spoke with Salon about how these slurs negatively impact social workers.

"We are in a profession, and are called to be in a profession, of being helpers, inherently," Thompson told Salon. "We are trying to improve our clients' lives, help them with difficult situations and be the voices for communities who are often disenfranchised. We go in to be helpers and we are being villainized now. And I don't think we can underscore enough the emotional toll that takes on people who are already in a very difficult role."

Like Bernaert, Thompson noted that much of this rhetoric is used when social workers try to help LGBTQ children. She described the attacks as "politically motivated" and having "absolutely no basis in reality, science or fact."

"Social workers who are trying to protect the rights of LGBTQ children and youth in our schools, they are often associated with language like 'Oh, they are trying to groom our children,'" Thompson told Salon. "One that I heard on a webinar recently was that teachers and social workers alike who are creating safe spaces for children are told they're trying to indoctrinate our children into being something. And that's really harmful."


Of course, there are real child sex abuse scandals — it's just that they don't typically happen within the spheres the right is targeting. Everything from the Jeffrey Epstein-Donald Trump parties with underage girls to the horrors within the Catholic Church reveals that child sex abuse is a rampant problem. Yet the people who draw attention to those scandals are often the same ones who work diligently to teach children in classrooms and protect them through careers as social workers. People who use the terms "pedophile" and "groomer" as political insults, particularly against people in those occupations, do a disservice to the very people they ostensibly wish to help.

As other commentators and reporters have noted, the far right appears to be borrowing a rhetorical tactic pioneered by Russian President Vladimir Putin to discredit his own political opponents. Whether being used against teachers and social workers or ordinary liberals, the "pedophile" and "groomer" insults exist not to protect children, but to make monsters out of people simply because their social philosophy differs from one's own.

"It's so hard to walk away from a belief that somebody is a monster — that somebody is doing something so pernicious, that somebody is doing something so horrible that if you believed it already — how do you believe it anymore without feeling bad about yourself?" Weingarten told Salon. She noted ruefully that these disinformation tactics are already being used even in the most absurd situations, such as Putin convincing millions of Russians that Ukrainian President Volodymyr Zelenskyy is somehow a Nazi despite being Jewish. It is "all part of the autocracy playbook," Weingarten explained, and has no place in any kind of educational environment.

"Teaching is relational and teaching is all about creating trust with kids," Weingarten told Salon.

Did the Supreme Court just become 'political'? God, no — it's always been that way

The Supreme Court is, and always has been, a political institution. That would be self-evident if not for the mystique that has been built up around America's most important judicial body. That aura has started to dissipate — a recent Monmouth University poll found that more than half of Americans disapproved of the court's recent performance — but it remains powerful enough that people take Chief Justice John Roberts seriously when he bemoans the supposed politicization of the Supreme Court. Before his retirement, Justice Stephen Breyer even published a book urging Americans to return the high court to its supposedly august and apolitical roots.


Now that the justices are evidently poised to overturn Roe v. Wade, those who insist (or imagine) that the Supreme Court must somehow remain above politics have become even more strident: Pro-choice advocates argue that the impending decision proves that the high court has strayed from its constitutional mission, while the anti-abortion contingent insists that, since judges are above politics, their reasoning is unassailable — and the presumed leaker has immeasurably damaged the institution.


These arguments are almost stunning in their historical ignorance. For one thing, the framers of the Constitution basically said nothing about the Supreme Court's mission, describing it simply as "one supreme Court." The Judiciary Act of 1789, passed during the first year of George Washington's presidency, fleshed out what the court would do, including assigning it six members (a chief justice and five associate justices; that number was officially expanded to nine in 1869). For more than a decade, however, the court took on few cases and had very little to do. The executive branch had proved strong under Washington and Congress quickly took on various legislative roles, but the judicial branch was initially unclear about exactly how much power it really had.


Politics changed that. After John Adams lost to Thomas Jefferson in the 1800 presidential election, he decided to stack the judiciary with members of his Federalist Party so that Jefferson's Democratic-Republicans couldn't implement their agenda. Yet some of the justices' commissions had not been delivered before Jefferson's inauguration, and since the new president believed that nullified those appointments, he instructed Secretary of State James Madison not to deliver them. One such appointee, Maryland businessman William Marbury, sued Madison, claiming that his appointment was legal and the government should be required to follow through with it.


Marbury likely believed that Chief Justice John Marshall, who was also a Federalist, would be sympathetic to his case; if so, he miscalculated Marshall's ability to play the long game. Apparently more intent on increasing his own power than aiding his political party, Marshall authored the landmark 1803 decision which agreed with Marbury that Madison's actions were contrary to law, but added that since the law involved was itself unconstitutional, it was not valid. So the precedent was established that the Supreme Court could strike down laws that it determined were in violation of the Constitution — which also launched the notion that the court was above politics.

Except it totally wasn't. What Marshall understood was that the appearance of putting partisanship aside would help legitimize the court's future decisions — even when they were blatantly partisan. (Arguably, the Roberts court's ruling that preserved the Affordable Care Act, while disappointing many conservatives, served a similar function.) In Marshall's case, this meant that the Federalist Party's influence remained relevant long after the party of Washington and Adams had faded away. Future justices sought to preserve the mantle of legitimacy Marshall had bestowed, even when they used it for very different causes.

Consider the most infamous Supreme Court decisions of the 19th century: Dred Scott v. Sandford in 1857 and Plessy v. Ferguson in 1896. In the first of those, the court ruled that an enslaved man in Missouri named Dred Scott could not claim to have been freed when his owners took him to Illinois and the Wisconsin territory, jurisdictions where slavery was illegal. In ruling against Scott, Chief Justice Roger Taney, an avowed white supremacist, found that people of African descent "are not included, and were not intended to be included, under the word 'citizens' in the Constitution," and as such had no legal rights. (As Salon's Keith Spencer recently noted, it is conceivable that people seeking abortions will face similar states' rights issues after Roe is overturned.)

Going one step further, the court ruled that the Missouri Compromise — an 1820 legislative agreement that sought to limit the expansion of slavery in newly-added states or territories — was unconstitutional. Of course the justices claimed this decision was based purely on legal issues, but the historical consensus holds that it was politically motivated. Incoming President James Buchanan, who supported the Southern slave-owner aristocracy even though he was from Pennsylvania, exerted pressure on the court to side with the pro-slavery faction, and probably heard about the decision from Taney in advance.


Politics again trumped the law in Plessy v. Ferguson, which required the court to rule on whether Louisiana had violated the 14th Amendment by segregating railroad cars. Since the amendment held that whites and Black Americans were equal under the law, this created a logical conundrum. Yet the justices, clearly motivated by a desire to avoid alienating white supremacists, evaded that common-sense argument and found that accommodations could be "separate but equal." The lone dissenter, John Marshall Harlan, called out the blatant political logic at play:

Everyone knows that the statute in question had its origin in the purpose not so much to exclude white persons from railroad cars occupied by blacks as to exclude colored people from coaches occupied by or assigned to white persons. Railroad corporations of Louisiana did not make discrimination among whites in the matter of accommodation for travelers. The thing to accomplish was, under the guise of giving equal accommodation for whites and blacks, to compel the latter to keep to themselves while traveling in railroad passenger coaches. No one would be so wanting in candor as to assert the contrary.

While those decisions upholding racial discrimination are the most obvious examples, politics has influenced numerous other Supreme Court decisions as well. While the Republican and Democratic parties have in many respects traded places as "liberal" or "conservative" formations since the 19th century, both have largely supported a social consensus favoring the interests of business over those of workers. It appears clear that when judges are appointed by politicians (in this case, nominated by the president and confirmed by the Senate), their philosophies are likely to be shaped by politics. The Supreme Court has a long history of handing down decisions unfavorable to labor organizing or working people, even if they are presented in neutral-sounding legal language.

For instance, the 1905 decision Lochner v. New York overturned a law setting maximum working hours for bakers on the grounds that it violated the right to freedom of contract; that supposed right came up again in 1923, when the court overturned a minimum wage for women in Adkins v. Children's Hospital. (That ruling, by the way, came under Chief Justice William Howard Taft, a former president. That's the only time a former president has served on the Supreme Court, although Taft's successor as Chief Justice, Charles Evans Hughes, was a former Republican presidential nominee.)

After Franklin D. Roosevelt was elected in 1932, the politically-motivated tendency to find reasons why laws regulating business operations were unconstitutional went into overdrive. There were four justices on the Supreme Court who clearly loathed FDR's policies, and were determined to short-circuit his agenda however they could. Nicknamed "the Four Horsemen," Justices Pierce Butler, James Clark McReynolds, George Sutherland and Willis Van Devanter viewed themselves as ideological crusaders on a mission to take down a president they perceived as a dangerous socialist.

Roosevelt tried to solve the problem in 1937 through what is now called "court-packing" — specifically, by adding a new justice each time a current one passed the age of 70 but refused to retire. We'll never know whether that might actually have made the Supreme Court less political, but in the event the plan blew up in Roosevelt's face. His only consolation came in the form of an unexplained change of heart by Justice Owen Roberts, who had previously opposed the New Deal but voted to uphold Washington state's minimum wage in the case West Coast Hotel Co. v. Parrish. That deflated Roosevelt's court-packing plan — and solidified the entirely fictional notion that the high court was above politics, or at least was supposed to be.

Yet not much the court has done since Roosevelt's era has made that notion more plausible than it was before 1937. In 2000, it installed George W. Bush as president in a 5-4 ruling that could not possibly have been more nakedly partisan. A decade later, in Citizens United v. FEC, the high court's conservative justices managed both to side against Hillary Clinton and assert that corporate campaign expenditures were effectively political speech, and could not be regulated under the First Amendment.

More recently, of course, the Supreme Court confirmation process has become the focus of Machiavellian politics, largely because of Senate Republican leader Mitch McConnell, who refused to consider Barack Obama's nominee in 2016, arguing that it was an election year, but pushed through Amy Coney Barrett's 2020 nomination just days before Joe Biden was elected. Add to that the firestorm that surrounded Brett Kavanaugh's confirmation in 2018, and it's almost bizarre that anyone can pretend the court is not infused with politics. Those three justices nominated by Donald Trump, of course, have created the conservative supermajority that has led to the near-certain downfall of Roe. That makes the court appear more political than ever before, perhaps — but appearance is not the same thing as historical reality.

This is what it was like trying to get an abortion in the United States before 1973

On May 2, 2022, an anonymous whistleblower leaked to Politico a draft opinion showing that the Supreme Court is planning to overturn Roe v. Wade. If that indeed comes to pass, the landscape of reproductive rights and abortion access in the United States will shift radically overnight, with each state deciding for itself whether to outlaw abortion. In many ways, those states that outlawed abortion would resemble their counterparts in the pre-Roe era (meaning before 1973).


Indeed, America before Roe v. Wade was a hostile place for those who needed to exercise their reproductive rights. Looking back at that world from the vantage point of 2022, it is clear technology has advanced enough that (at least medically speaking) it has become much safer to terminate a pregnancy. Even so, it is useful to look at what life was like in America before the 1973 Supreme Court decision that protected reproductive rights, since it offers a glimpse into what might happen in America after the Supreme Court's Republican wing prevails in overturning Roe v. Wade.

"Like everything else in American society, what your chances were of coming out alive or not injured were very much a function of class and race," sociologist and reproductive rights activist Carole Joffe told Salon. Americans who were white and had ample financial resources might still struggle to obtain an abortion, but their plight was almost always much easier than that of someone who lacked money or came from a marginalized racial group. What's more, if you sought an abortion, there were three types of doctors that you might encounter — and you didn't want to end up with the wrong one.

"There was a wide variety of illegal practitioners, some very competent and very decent," Joffe explained. "These are people who in my scholarship I've referred to as 'doctors of conscience.' These were people — mainly men, some women — who were making it in mainstream medicine and who decided to do abortions, literally, as a matter of conscience. They saw the ravages of either self-induced abortion or abortion done by very incompetent practitioners. They saw the ravages of this in emergency rooms. So they decided that they knew how to do it safely. And they did it, obviously, with some risk. They had no idea whether they would be caught, whether they would lose their license, whether they would go to jail."


Another group, whom Joffe dubbed the "butchers," were the incompetent and unethical hacks who took advantage of a patient's desperation and often left them maimed or worse. Even those who were trained health professionals had frequently, for various reasons, failed in their mainstream medical careers.

Finally there were what Joffe described as "sort of a third middle ground — people who were not especially political, who did not necessarily care deeply one way or another about social justice issues, but who quietly did abortions for money. They don't dominate anybody's imagination now."


It is worth noting that the chaotic state of abortion rights in mid-20th century America was not, as some on the Supreme Court have implied, a longstanding tradition. When the Founding Fathers developed the Constitution, they intentionally stayed quiet about all medical procedures — including abortions. Because they viewed abortion as falling into the same category as any other health matter that a citizen might privately discuss with a doctor, they deferred to the common law assumption that neither the courts nor lawmakers should intervene. James Wilson, who wrote the preamble to the Constitution, summed up this perspective during a lecture in 1790 (three years after the Constitution's ratification) where he deferred to an English common law expert's view that "in the contemplation of law, life begins when the infant is first able to stir in the womb."

"Stir in the womb" refers, in this context, to quickening, which can occur as early as 16 weeks and as late as 25 weeks into a pregnancy.

Setting a precedent that has continued to the present, the first major attempts to regulate abortion came from individuals who wanted to subjugate women. In the mid-19th century, a physician named Horatio Storer began to call for the national regulation of abortion as part of his belief that male doctors and not female midwives should populate the field of medicine. There were also concerns that Protestants would have abortions and Catholics would not, enabling immigrants from Catholic countries to out-populate whites. Finally there were people who wanted to regulate abortion so that slaves could be forced to bear children.

By the mid-20th century, these various strains of prejudice had led to a state of chaos. By 1971, only six states and Washington, D.C. legally allowed abortions. Although New York had only legalized the procedure one year earlier, the state quickly developed a reputation as the best place to go for people who needed abortions; in 1971, 84 percent of the abortions performed for patients who lived out of state took place in New York. As an Ohio college freshman from the time named Pamela Mason later recalled, "I was very relieved because New York was doable. It was 500 miles away."

Overall, experts believe that 400,000 abortions were performed in New York during the brief period from 1970 to 1973 when it was one of the few states that legally permitted the procedure. It is estimated that two-thirds of those abortions were for out-of-state patients.

But how many Americans received abortions overall during this time?

"Scholars will probably never be able to answer that question with precision precisely because the procedure was illegal," Karissa Haugeberg, assistant professor of history at Tulane University, told NPR in 2019. "But scholars estimate that between 20% and 25% of all pregnancies ended in abortion before Roe v. Wade." Haugeberg added that roughly 200 women died from botched abortions every year during the period immediately preceding Roe v. Wade. In addition to succumbing to shoddy abortionists, many women died from self-induced attempts: throwing themselves down stairs, taking poisons or using coat hangers.

In a brief filed with the Supreme Court in December during arguments over the current abortion case, Dobbs v. Jackson Women's Health Organization, 154 economists and researchers pointed out that Roe v. Wade helped create order out of this chaos and materially benefited Americans in a number of ways: It reduced teen motherhood by 34% and teen marriage by 20%, improved women's educational attainment and increased women's wages.

"Women continue to rely on abortion access to plan their reproductive, economic, and social lives," the brief says. "Causal inference tells us that abortion legalization has caused profound changes in women's lives. But those changes are neither sufficient nor permanent: abortion access is still relevant and necessary to women's equal and full participation in society."

Jimmy Carter's landmark moment: The birth of the disability rights movement

Standing up could be intensely painful for Franklin D. Roosevelt, but he was determined that the public would never know.

Roosevelt was paralyzed from the waist down in 1921 — it's not clear whether that was caused by polio or Guillain–Barré syndrome — and went to great lengths to conceal his disability. Aside from his inner circle of confidants and journalists who covered his political career, few Americans ever understood that FDR, who became the longest-serving president in our history, could not stand or walk unaided. He stood upright only with clunky, awkward braces that placed constant pressure on his abdomen. Yet he bore this burden silently, and when necessary went above and beyond the non-disabled people around him.


One incident stands out: a moment during the 1932 presidential campaign when Roosevelt remained standing during an unexpected rainstorm, even as everyone around him ran for cover. This was more than showmanship, although it certainly helped his public image as he campaigned to defeat President Herbert Hoover that year. Roosevelt, at the time governor of New York, also remained standing in order to doff his hat to every National Guard platoon commander who passed him on parade. More than a quarter-century later, a Hungarian Jewish immigrant who was a corporal in the 71st Infantry at the time wrote a letter to Eleanor Roosevelt recalling what he had observed (the former First Lady replied with gratitude):

I noticed that all of the occupants in the car had covered their heads or were endeavoring to ward off the rain, but there stood President Roosevelt, hat over his heart, the rain beating down over him, never flinching, every inch a man. This was an example of military, moral and spiritual courage. I have never forgotten it.

That infantry corporal almost certainly didn't know that Roosevelt had a significant disability, and was essentially standing there locked in place. Disabled people throughout history have had to make those kinds of sacrifices, or cover up their limitations even at risk of pain or injury, simply to function in society.


Roosevelt's suffering was not for nothing. Among other things, he founded a comprehensive disability treatment facility in Warm Springs, Georgia, which today is named for him. Over the years, Warm Springs has become a political mecca for aspiring public officials and others who wish to express support for the disability rights movement. When another Democrat, Jimmy Carter, decided to establish himself as a supporter of disability rights during the 1976 presidential election, he visited Warm Springs to make that promise. After he was elected president, however, Carter often struggled to implement key parts of his agenda, and disability rights was no exception. So disabled people had to step in to do the work that the non-disabled simply couldn't get done on their own.

Prior to Carter's presidency, disability activists had struggled to convince government officials to take their concerns seriously. When Congress in the early 1970s passed a law to expand states' abilities to provide comprehensive vocational rehabilitation services, a staffer at what was then called the Department of Health, Education and Welfare (HEW) added a few sentences protecting disabled individuals from discrimination. No one expected that to be controversial; it was worded to resemble the Civil Rights Act of 1964, merely extending that bill's logic and language to another marginalized group. Yet this provision, known as Section 504, would become the cornerstone of a massive controversy and an entire movement. It stated that any entity that received federal funding could not discriminate against disabled people, at risk of losing that funding.

Richard Nixon and his successor, Gerald Ford, weren't especially sympathetic to Section 504, and supported the arguments of business advocates who claimed it was unenforceable and overly expensive. At first Republicans wanted to kill the bill entirely, with Nixon vetoing two versions, in 1972 and 1973, because he thought the Section 504 language was too strong. After the first veto, a group of 80 disability activists in New York, many in wheelchairs, protested by stopping traffic on Madison Avenue. Eventually there was enough pressure from Democrats (who controlled Congress at the time) to compel Nixon to sign the law, but both he and Ford delayed implementing it. Even after a judge ordered the Republican administration to cease "unreasonable delays," regulations needed to implement the law remained unsigned through the end of Ford's tenure.


Activists were heartened when Carter took office because, starting with his visit to Warm Springs in 1976, he had cast himself as their ally. While Carter's heart was in the right place, he often waffled when it came to following through on his goals — not a problem of intention, perhaps, but conceivably one of competence. Carter wound up passing the buck on Section 504 to his new HEW secretary, Joseph Califano, who formed a task force on the issue — one that included no people with disabilities.

Around that time, activist and advocacy groups run by people with disabilities were emerging, and a number merged into the American Coalition of Citizens with Disabilities. The coalition found out that Califano's task force was likely to weaken the regulations needed to implement Section 504, in an effort to strike a compromise with the business community. So the ACCD told the Carter administration it would launch a series of protest actions if the existing regulations weren't signed unchanged.

"I think this was brilliant, because rather than waiting until watered-down regulations were issued publicly and then responding, issue by issue, this meant the government would have to respond to the demonstrators," wrote Disability Rights Education & Defense Fund activist Kitty Cone, who participated in the protests. "Additionally, it was not that easy to organize people, particularly people with physical disabilities, in those days, due to lack of transit, support services and so on. A sit-in meant people would go and stay, until the issue was resolved definitively."

So began the 504 sit-ins, in which activists organized by the ACCD either occupied or picketed HEW offices in Washington, Atlanta, Boston, Chicago, Denver, Los Angeles, New York, Philadelphia and Seattle. Like Roosevelt forcing his body to remain standing in the rain, the protesters willingly underwent physical and emotional pain because they believed they were doing so for a just cause. The most notable protest was a San Francisco sit-in that lasted more than three weeks. One of the organizers, activist Judith Heumann, recalled that "at the start of our demonstration at the HEW offices, officials treated us with condescension, giving us cookies and punch as if we were on some kind of field trip."

Once authorities realized that the protesters really weren't going to go home, the magnitude of the moment became clear. This was a highly diverse group of people with different kinds of disabilities: injured Vietnam veterans, lifetime wheelchair users, deaf people, blind people, individuals with mental disabilities. Asking the cops to roust them out of the building would have been terrible optics. Carter's administration was in a bind. More important still, this was effectively the first moment that Americans in general were compelled to confront their condescension and lack of comprehension when it came to disabled individuals. Cookies and punch weren't going to cut it.

As one activist later recalled, "discomfort and anxiety was the order of our day to day existence. Everyone faced these questions, How can I get my meds? Where will I sleep? What about food?" People lost their sense of privacy as they performed intimate personal care in close proximity to strangers. Two protesters suffered long-term exacerbation of their multiple sclerosis symptoms after going without treatment for a prolonged period. People had to live without catheters, back-up ventilators and medicines. The food problem was mitigated for one reason: Black Panthers stepped in and made sure the protesters, at the very least, would not starve.

It worked — and to a greater degree than most activists expected. In the most immediate triumph, the Section 504 regulations were signed intact on April 28, 1977. From that moment on, the climate around disability in America was forever transformed: No longer could an employer fire a disabled person just because having them around was too much trouble, or made others uncomfortable. While the stereotype of disabled people as helpless is still with us, the Section 504 protesters demonstrated that they could fight for their rights just like anyone else. Furthermore, they accomplished that by establishing an alliance based on the premise that everyone with a disability was in the same boat, regardless of the specifics. Finally, Section 504 laid the foundations for the landmark bill that would arrive more than a decade later, the Americans with Disabilities Act of 1990.

I am an autistic person with a hand-eye coordination disability, and am among the millions of beneficiaries of the Americans with Disabilities Act, and by extension of Section 504. I am also the grandson of the Hungarian Jewish immigrant who, in 1932, was awed to witness Roosevelt's inspiring act of physical sacrifice as he stood through that rainstorm. I never met Laszlo Rozsa, who died three years before I was born. I know he was head of the proof room at the New York Times, working the "lobster shift" and continuing the family tradition of newspaper jobs that began with his father, a Budapest printer named Mores Rozsa. Laszlo's only child, Lance Rozsa, worked in education rather than journalism (he became a school superintendent in New Jersey). My passion is for both: I'm a PhD candidate in history at Lehigh University and I've been a journalist for 10 years (and a staff writer at Salon for six).

When I interviewed Jimmy Carter in 2018, I thanked him for his support of Section 504 and for what he had done for the disability community. When I asked him what advice he had for younger Americans today, he responded: "Never give up, and follow the advice of my school teacher: 'We must accommodate changing times but cling to principles that do not change.'"

The cult of Elon Musk: Why do some of us worship billionaires?

Less than 24 hours after agreeing to purchase Twitter, Tesla CEO Elon Musk may have already violated the terms of the deal that allowed him to take over the social media company. Although one of those terms is that he may only tweet about the acquisition "so long as such tweets do not disparage the Company or any of its Representatives," he posted two tweets on Tuesday that parroted right-wing talking points attacking specific employees.

Normally there would not be many individuals applauding a wealthy CEO who purchased a company and then immediately attacked vulnerable employees, almost certainly knowing that doing so would instigate mass harassment against said employees (which is exactly what happened). In normal contexts, such a person would be classified as nothing more than a bully. Then again, when you are a billionaire with a cult of personality, there will always be people who applaud your actions.

How does a supercilious, uncharismatic billionaire bully attract a horde of ardent fans? According to experts, it all comes down to basic tenets of human psychology. Many people fantasize about being billionaires, so when they root for Musk, they're really rooting for what they perceive as a version of themselves — namely, as masters of the universe, "winners" in every sense that mainstream society deems worthy. In the process, they also reveal their own deep feelings of inadequacy.

"Most people aspire to a lifestyle that they're not willing to work for or that they can't afford," explained Dr. Tara Bieber, a neuroscientist at the Massachusetts Institute of Technology. In her interview with Salon, she emphasized that she was speaking from a strictly scientific perspective; this was not a question of any individual's political beliefs. It was, instead, a manifestation of the same trends that have caused past billionaires to amass cults of personality alongside their dollars: automotive entrepreneur Henry Ford, business magnate Howard Hughes, and more recently Apple founder Steve Jobs. Each of them possessed an undeniable charisma that drew people to them, and each carefully cultivated a public image consistent with the aspirational values of their time.

And, unsurprisingly, they also checked the right demographic boxes to benefit from various forms of societal privilege. For one thing, they are almost always white. For another, they are almost always male.

"One immediate commonality that I see is that all of these famed, admired, and perhaps infamous business leaders are male, and the stories we tell about them reflect an admiration for prototypical male qualities," Karen M. Landay, PhD, Assistant Professor of Management at the Henry W. Bloch School of Management at the University of Missouri-Kansas City, told Salon by email. Yet it is not the maleness that allows them to develop a billionaire cult of personality; that is only the foot in the door.

The next step is having psychopathic traits.

"I don't want to comment on whether [Musk] is one or not, because he's not my patient," Bieber told Salon. "But some of the psychopathic traits are being very charming, being very persuasive, being fearless and ruthless." All of these qualities were attributed to people like Ford, Hughes, Jobs and Musk, and each one can work to the benefit of society — if channeled correctly.

"Basically the difference between what we would call a psychopath and people that we admire is like a surgeon or a killer, a judge, or a gangster, they may have some of the same characteristics, but are either at a different intelligence level or they're doing things that are actually unacceptable to society," Bieber explained. As Landay explained, psychopathic tendencies consist of three personality traits: boldness, "such as interpersonal dominance"; lack of empathy and a tendency toward being mean; and disinhibition, "such as impulsivity."

"Essentially, individuals with psychopathic tendencies have the potential to be a much worse than average jerk, yet because of those very qualities, it's plausible that they might find great success in business organizations," Landay told Salon. These traits can be used to benefit humankind — or only to glorify the billionaire's own ego. In the case of Musk taking over Twitter, the exhilaration from his supporters seems to stem both from a belief that he will help right-wing causes and from the sense that Musk can say or do whatever he wants without consequences. It is a dream come true for them, albeit lived out by another man.

Nor is that the only fantasy Musk is living out for these admirers.

"They're being fed the messages from society that you should be rich," Bieber told Salon. "You should have a nice car. You should have a beautiful girlfriend. And so they look at him and he's got those things and they want to be like him." Since they cannot actually acquire those things — and, if they try to create a poor facsimile in their own lives, will almost certainly know on some level that it is fraudulent — they respond in toxic ways.

"Unfortunately, I think it gives some people permission to behave badly to say mean things on social media, to treat their family members badly," Bieber explained. Even though they are not Musk and will never be Musk, "they'll take the aspects of his behavior and personality that they can play out and they'll do those in their real life."

If it seems like there is a macho subtext to all of this glorification, that isn't a coincidence.

"Interestingly, my own research on psychopathic tendencies revealed that when men and women engage in similar behaviors indicative of psychopathic tendencies, while men are rewarded, women are punished," Landay explained. "That is, men displaying these bold, mean, disinhibited behaviors are more likely to become leaders and be viewed as effective leaders, whereas women displaying those same behaviors are less likely than men to become leaders and more likely to be viewed as ineffective leaders."

Emma Haslett of The New Statesman used a similar lens to analyze Musk's behavior in a November article, one that assessed how Musk has leveraged his cult of personality into a volatile asset for his business brand.

The answer lies, at least in part, in Musk and his unfiltered personality. The New York Times described him as "at once a capitalist hero, a glossy magazine celebrity and a bomb-throwing troll". His communiques – like the "Tits university" and its "epic merch" – have given him cult-like status. He has smoked weed on a podcast, he tweets whatever he wants (including unsubstantiated accusations of paedophilia), and in 2018 he caused outrage (and a drop in shares) when he bemoaned analysts' "boring, bonehead" questions. Traditional investors see him as dangerously volatile – but his followers regard him as relatable and refreshingly down-to-earth.

At the end of the day, the cult of Elon Musk can best be understood using the same lens that Musk himself seems to apply to his day-to-day life: self-interest.

"Those who benefit from Musk's behavior will celebrate it, whereas those who don't (or perceive some loss due to his behavior) will decry it," Landay wrote to Salon. "In the case of Musk's purchase of Twitter, because of events such as the infamous ban of Donald Trump, based on Musk's prior comments, people on the right of the political spectrum are likely expecting a benefit in the form of loosening those restrictions and possibly a return of Trump's famously erratic Twitter behavior. For people on the left of the political spectrum, Trump's ban has been a welcome reprieve, so with Musk's ownership of Twitter, they're likely expecting to lose that reprieve."

The probability of life on Jupiter's moon Europa just got a lot higher

Human beings have long looked to the stars and hoped that alien life might look back at us. Yet the truth is that the first extraterrestrial life we discover is far more likely to be microbial — a prospect less romantic perhaps than the idea of bipedal aliens shaking hands with humans after landing on Earth.

This article first appeared in Salon.

Such microbial life has been theorized to have existed in the early days of Mars, before its water dried up, though we still don't know for certain. Now, astrobiologists are turning their gaze towards another nearby neighbor, Europa — an icy gray moon of Jupiter — as a suddenly much more alluring candidate for simple life.

Renewed interest in Europa's potential to harbor life stems from a new study about the peculiar moon. The subject of curiosity is the giant ridges that criss-cross the moon's surface like scratches on a cue ball. Underneath those ridges, explain the authors of a new paper in the journal Nature Communications, there may be pools of salty, liquid water. And since those ridges are ubiquitous, that means the pools could also be commonplace.

Of course, early microbial life on Earth evolved in the liquid salt water environment of our oceans — which is what makes the hint of salt water on Europa so tantalizing. The unique geography of Europa also happens to very much resemble Northwest Greenland, which is the other half of what the study concerns.

"Here we present the discovery and analysis of a double ridge in Northwest Greenland with the same gravity-scaled geometry as those found on Europa," the authors explained. "Using surface elevation and radar sounding data, we show that this double ridge was formed by successive refreezing, pressurization, and fracture of a shallow water sill within the ice sheet. If the same process is responsible for Europa's double ridges, our results suggest that shallow liquid water is [ubiquitous] across Europa's ice shell."

Europa is not a particularly large world; a mere 2,000 miles in diameter, it is not even as large as Earth's own moon. Yet Europa's surface is unique, festooned with giant double ridges that can tower as high as 1,000 feet.

When a team of scientists at Stanford University learned about Europa's double ridges, they decided to study smaller geological structures in Greenland's northwest. More specifically, they studied a small double ridge feature in Greenland and learned how it was formed: Shallow pools of water beneath the surface froze and then broke through on multiple occasions, repeatedly pushing up the twin ridges.

If the analogous ridges on Europa were formed the same way, as seems probable, the constant churning could have helped bring about the chemical reactions necessary to create life. It is an intriguing premise, to say the least, and is part of a long history of astrobiological interest in Europa.

"Scientists know from a combination of observations by Earth-based telescopes and spacecraft such as Galileo that the surface of Europa is covered primarily with water ice," Dr. Cynthia B. Phillips, Europa Project Staff Scientist and Science Communications Lead from the NASA Jet Propulsion Laboratory, told Salon by email. Astronomers estimate that Europa's outer layer has the same density as water ice and is roughly 100 kilometers thick, but the gravity measurements used to obtain that estimate do not answer questions about exact composition: How much of this layer is solid ice, and how much is liquid water?

"Gravity measurements also tell us that below this ice/water layer is a layer of rock and then a metallic core at the center," Phillips, who was not involved in the most recent study, added. If you want there to be life in the universe, these are all good signs, as they suggest the basic ingredients could exist on the enigmatic moon.

"There are three things needed for life as we know it," explained Dr. Christopher Chyba, Professor of Astrophysical Sciences and International Affairs at Princeton University, in an email to Salon. In addition to liquid water, you need "the so-called biogenic elements" — like carbon — "that our kind of life is based on," plus a source of useable energy. "NASA's strategy for searching for life has long been 'follow the water,' and Europa and Enceladus in our Solar System are the two places, besides Mars, where we have a lot of evidence for liquid water that is probably accessible to exploration," explained Chyba, who was not involved in the study.

Chyba said it would be "bizarre" if Europa did not form with the "usual complement" of biogenic elements one finds on celestial bodies, "but even if Europa somehow formed without them, the late Betty Pierazzo showed that Europa would have accumulated a significant inventory of them over Solar System history from comet impacts." Pierazzo was a researcher at the Planetary Science Institute who specialized in impact craters.

Dreamers of Europa-pean life can also take heart in the magnetic field results, which give strong evidence of an "induced field" generated as Europa orbits Jupiter, which has a very strong magnetic field of its own. What accounts for Europa's induced field?

"The best explanation for the source of this induced field is a global salty water ocean," Phillips told Salon. "We think that Europa has the ingredients for life as we know it — more liquid water than all of Earth's oceans combined, plus the right other chemical elements and an energy source. On Earth, we find life wherever we have these three ingredients, so we think that Europa is one of the best places to look for life in our solar system beyond the Earth."

Chyba echoed this view when he wrote that it is "possible, based on what we know so far, to imagine types of microorganisms that could live in Europa's ocean." There was the important caveat, though, that we are dealing with the truly alien — we cannot know for sure that life can only develop as it has on Earth because we do not fully understand what causes "life" to exist in the first place.

"We don't know if there is life there or not, because we don't have enough of an understanding of the origin of life (on Earth or anywhere else) to say whether Europa's conditions would have favored the origin of life," Chyba observed.

Lessons of the radical Republicans

Once upon a time in a country somewhat resembling this one, the Republican Party had a radical faction — and not because it believed in bizarre theories about election fraud or wanted to undermine democracy. By modern standards, the Radical Republicans of the 1860s would clearly be regarded as leftists: They fiercely supported racial equality, had no tolerance for insurrectionists and believed government should help the most vulnerable people in society. Their story is important for many reasons: They helped shape modern-day America, and they may even provide clues about how it can be saved.

This article first appeared in Salon.

As with so many great stories in American history, this one begins with Abraham Lincoln.

After Lincoln won the contentious presidential election of 1860 — in several states, his name wasn't even on the ballot — slave-owners across the South, convinced that this meant the end of their "peculiar institution," decided to secede from the Union. That provoked the Civil War, of course, but as you probably know, it didn't immediately lead to the end of slavery. Indeed, for almost two years, many Republicans harshly criticized their own party's president for moving too slowly on that issue. Even after Lincoln issued the Emancipation Proclamation, the so-called Radical Republican wing noticed his loophole: It only applied to enslaved peoples in the rebellious states; those in states that had not seceded, such as Kentucky, Maryland and Missouri, were still in shackles.

Lincoln was unambiguous, however, in his contempt for rebels. Whether or not it's fair to compare the attempted coup of 2021 to the insurrection that began in 1860, Lincoln viewed the latter as straightforward treason. If citizens in a democratic society are permitted to rebel simply because they dislike the results of an election, he reasoned, then democracy itself cannot endure. One law passed with Lincoln's support banned former Confederate leaders from holding political office of any kind — and even there, many of the Radical Republicans felt he was being too lenient.

With the surrender of the Confederacy and Lincoln's assassination in 1865, everything changed. The new president was Andrew Johnson, a former Tennessee senator and an avowed white supremacist, although he had remained loyal to the Union. Although passage of the 13th Amendment had ended slavery for good, Johnson gave the formerly rebellious states so much leeway that the plight of newly emancipated Black people was not much better than it had been before. The Radical Republican cause was clearly on the back foot — until they struck back.

After Johnson vetoed the Civil Rights Act of 1866 — which made it illegal to deny someone equal citizenship based on color — Republicans overrode his veto, the first time that had ever happened with a major piece of legislation. Radical Republican commentators and orators toured the land, presenting Lincoln as a martyred hero (despite their tepid attitude toward him in life) and insisting that Johnson was disgracing his memory. By the time the 1866 midterm elections rolled around, they had created conditions for what we would now call a "wave election."

Once in control of Congress, the Radical Republicans made what must be regarded as a grave error: They impeached Johnson for purely political reasons. Frustrated at the president's intransigence and bigotry, leading Republicans like Sen. Benjamin Wade of Ohio and Rep. Thaddeus Stevens of Pennsylvania set a trap for Johnson. They passed a blatantly unconstitutional law that restricted the president's power to remove certain officeholders without the Senate's approval. Then they waited until Secretary of War Edwin Stanton (a Lincoln holdover and Republican ally) disobeyed Johnson's orders and was dismissed, using that as the pretext to begin impeachment proceedings. Johnson was impeached in the House but avoided Senate conviction by one vote. Today, the legal consensus holds that this was nearly a dreadful miscarriage of justice: Johnson was a bad president, but the Republicans had no legitimate reason to remove him from office.

Aside from that, and their later willingness to turn a blind eye to the various scandals of President Ulysses S. Grant's administration, the Radical Republicans were on the right side of history a hell of a lot. In 1871, they pushed for a new Civil Rights Act that allowed Grant to suspend the writ of habeas corpus to fight the Ku Klux Klan and other white supremacist groups. With Grant's support, they pushed through a series of laws in 1870 and 1871 that tried to undo racial discrimination as much as possible. These Enforcement Acts were meant to guarantee that Black men could vote, serve on juries and hold office, and were entitled to equal protection under the law. (Women of any race had few political rights, and that didn't begin to change until the end of the century.)

This was of course the period of progressive reforms and potential racial reconciliation known as Reconstruction. But then came the Depression of 1873. As usually happens during economic downturns, the incumbent party was blamed, but the 1874 midterm elections were no ordinary contest. In addition to the usual economic resentments, many Americans in the North wanted to put the Civil War in the past and end the de facto military occupation of the Southern states. Instead of blaming racial terrorists like the KKK for the continuing conflict, some blamed the Radical Republicans. The result was a massive swing to the Democrats, who defined themselves as a populist party representing the interests of working people both North and South — as long as they were white. In that election, Democrats picked up 94 House seats (out of just 293 at the time) and held a majority for 12 of the next 14 years.

One more shoe needed to fall, and that was the tortuous presidential election of 1876, in many ways an eerie precursor to the 2020 contest. As I observed two years ago, the 1876 election had the highest voter turnout rate in American history, at 81.8%, while the 2020 election had the highest turnout (about 66%) in 120 years. At least in 2020 only one side tried to cheat, whereas in 1876 both sides did.

Republicans hoped their candidate, Rutherford B. Hayes, could keep them in power another four years despite the apparent turning of the tide. It's still not clear whether Hayes or Democrat Samuel J. Tilden would have won a free and fair contest, but the tainted and deadlocked election ended in the Compromise of 1877, in which Hayes won the White House at the cost of ending Reconstruction, effectively allowing the South to launch the Jim Crow regime of racist oppression and legal segregation.

What are the lessons of the Radical Republican experiment? That depends on your point of view, of course, but we might conclude that sacrificing one's core political principles in the interest of winning elections never ends well. It would take nearly 100 years for America's political reality to catch up to the Radical Republicans' policies, but they set an important example that has fascinated historians and progressive activists ever since. A second and perhaps more difficult lesson is that in politics you have to expect the unexpected, such as economic downturns and currents of domestic unrest. We are not quite halfway through Joe Biden's first term as president, and after the Afghanistan withdrawal, two new waves of COVID and the war in Ukraine, that lesson seems to be harshly reinforced every single day.

Pluto wasn't the first: A brief history of our solar system's forgotten planets

A kindergartener in 2005 and a kindergartener in 2006 would have learned very different facts about the number of planets in the solar system. 2006, of course, was the year Pluto was reclassified as a dwarf planet — a move that sparked outrage among a public that tends to romanticize our solar system.

This article first appeared in Salon.

But long before the Pluto "controversy," other objects moved on and off the official list of solar system planets. Indeed, a kindergartener in the early 1800s would have learned that Ceres was a planet.

So while the argument over planethood might seem like a modern astronomical debate, 19th century astronomers were bedeviled by this same question of what actually counts as a planet.

And, as alluded to, Ceres predates Pluto in this controversy. The asteroid belt, which sits roughly between Mars and Jupiter, is filled with minor planets and asteroids. One of those celestial bodies, Ceres, has a surface covered in minerals like clay and carbonates, as well as water ice. It is an odd world, to be sure: because it is not completely frozen and harbors salt water, scientists believe Ceres could host microbial life. This places Ceres in stark contrast with Pluto, which is on the far side of the solar system and has an entirely frozen surface. In addition, whereas Ceres is a dull monochromatic gray, Pluto's colors range from white and black to vivid orange.

Yet Ceres and Pluto have one very important thing in common: Astronomers at one point thought they should be classified as planets, but then changed their minds. It all comes down to size, which in the case of planetary science really does matter.

Flash back to the beginning of the 19th century. An Italian priest and astronomer named Giuseppe Piazzi at the Palermo Observatory had answered a nearly three-decade-old question: Why did the orbits of Mars and Jupiter indicate that a planet existed between them even though none could be found? On Jan. 1, 1801, Piazzi seemed to answer this question by announcing that he had found a "star" which had moved from its position in the Taurus constellation. Scientists soon concluded that this must be the missing planet and assumed the matter was resolved.

Then another "planet" was discovered. On March 28, 1802, German physician and astronomer Heinrich Olbers discovered Pallas; this was rapidly followed by Juno in 1804 and Vesta in 1807. Each was duly designated as a planet, although astronomers began to have their doubts that this increasingly cumbersome system was working out. Although scientists were given a breather for a few decades, a plethora of new discoveries between 1845 and 1852 left the astronomical community with 15 asteroids to account for. None of the new ones were labeled as planets, but it was becoming increasingly clear that reforms would be necessary. By 1867, it was clear that Ceres was too small to be grouped in with a body like Earth, and so it was given a new designation: minor planet. And instead of fancy names and symbols, minor planets would be labeled with numbers based on the order in which they were discovered or had their orbits determined.

This brings us to Pluto. While Ceres has a diameter of 588 miles (compared to Earth's 7,918 miles), Pluto has a comparatively hefty diameter of 1,477 miles. Yet this did not save Pluto from getting the axe as a planet when the International Astronomical Union met in 2006. The reason was, quite simply, that astronomers had settled on three criteria for a full-sized planet:
It is in orbit around the Sun.
It has sufficient mass to assume hydrostatic equilibrium (a nearly round shape).
It has "cleared the neighborhood" around its orbit.

Because Pluto did not meet the third requirement — it has not "cleared the neighborhood" around its orbit — it lost its status as a planet. Clearing the neighborhood means that the region of space along a body's orbit is bereft of other large bodies, which have been absorbed into the planet. Ceres, like Pluto, clearly doesn't meet this criterion: the asteroid belt in which Ceres resides is evidence of a "failed" planet that never cleared its neighborhood. Indeed, multiple other relatively massive bodies — Vesta, Pallas and Hygiea — share Ceres' vicinity.

Pluto had held the status of planet for 76 years, starting with its discovery in 1930 by American astronomer Clyde W. Tombaugh. The demotion of Pluto to dwarf planet remains controversial, and not just among lay astronomers. A team of American scientists published a paper in December in the scientific journal Icarus arguing that a "planet" should be defined as any geologically active celestial body. One co-author argued that we should say there are "probably over 150 planets in our solar system"; the paper claimed that the need to distinguish planets from moons is cultural, not scientific, and hinders proper understanding of astronomy.

"We found that during the 1800s the non-scientific public in the Latin west developed its own folk taxonomy about planets reflecting the concerns of astrology and theology, and that this folk taxonomy eventually affected the scientists," the scientists explained. They later concluded that "using the geophysical planet concept with subcategories for the individual features (including gravitational dominance) makes the planet concept both useful and deeply insightful for communicating with the public." This did not happen in 2006, they assert, because "adequate time was not taken to sort these issues," with the resulting vote leading to "a deeper split in the community."

Ironically, even as Pluto was being demoted, Ceres nearly received a promotion. An earlier 21st-century proposal would have defined a planet as any body with enough mass to be nearly round that orbits a star without itself being a star or a satellite of a planet. Had this definition been accepted, Ceres would have become the fifth planet from the Sun.

Bizarre UFO documents declassified by Pentagon

For five years, the U.S. Department of Defense ran a program that monitored reports of human encounters with UFOs (unidentified flying objects). Now the release of more than 1,500 pages of documents reveals that the agency compiled bizarre stories of unaccounted pregnancies, radiation burns and even brain damage during a secretive stretch from 2007 to 2012.

This article originally appeared at Salon.

First published by the British tabloid The Sun, which obtained them through a FOIA (Freedom of Information Act) request, the documents were originally created by AATIP (the Advanced Aerospace Threat Identification Program). The program's existence only became known to the public after former program director Luis Elizondo resigned from the Pentagon in 2017 and released videos of unidentified, fast-moving aircraft. While the U.S. government withheld some of the requested documents in the new release, citing privacy and confidentiality concerns, the materials that were produced are bound to fuel rampant speculation among the large community of UFO conspiracy theorists.

The Pentagon documents state that people who observed unidentified flying objects frequently displayed a cluster of similar physical symptoms: Injuries consistent with exposure to electromagnetic radiation (such as burns), heart ailments, and sleep disturbances. A report speculates that these could be caused by "energy related propulsion systems" and warns that the underlying technology could pose a "threat to United States interests." Additionally, in cases that would not seem out of place in an "X-Files" episode, there were accounts of "apparent abduction" and "unaccounted for pregnancy."

Another document from the cache contains a rubric for categorizing different types of seemingly paranormal experiences. If a person claims to have observed a UFO that had extraterrestrials on board, for instance, they are categorized as "CE3." By contrast, someone who says they encountered "ghosts, yetis, spirits, elves and other mythical/legendary entities" is classified as "AN3."
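Only two of the rubric's codes are named in the reporting above, so a minimal lookup for them might look like the sketch below. The dictionary structure and function are my own illustration; codes beyond CE3 and AN3 are deliberately omitted rather than guessed.

```python
# The two classification codes described in the released documents.
# Codes not named in the release are intentionally left out.
UAP_CODES = {
    "CE3": "close encounter: UFO observed with extraterrestrials on board",
    "AN3": "anomaly: ghosts, yetis, spirits, elves, and other "
           "mythical/legendary entities",
}

def describe(code):
    """Return the category description for a report code, if known."""
    return UAP_CODES.get(code, "unclassified report")

print(describe("CE3"))
print(describe("AN3"))
```

Any code outside the table simply falls through to "unclassified report," mirroring how a triage rubric routes reports it cannot categorize.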

Other documents describe efforts to communicate with extraterrestrial civilizations, plans for exploring and colonizing deep space, and studies of ways to pioneer technology like mind-controlled robots and invisibility cloaks.

The documents also reveal that former Sen. Harry Reid, a Nevada Democrat who served as Senate Majority Leader from 2007 to 2015, fought to learn more about UFO technology that he believed had been acquired by government contractors. One document shows Reid requesting a "restricted special access program" for work being conducted by BAASS (Bigelow Aerospace Advanced Space Studies), which had been awarded a $12 million contract to study "advanced aerospace weapon threats from the present out to 40 years in the future." Although Reid pointed out how BAASS had identified "several highly sensitive, unconventional aerospace technologies" which required "extraordinary protection," he was not allowed to conduct the level of investigation that he wanted.

This is not the first time that the public has been made aware of Reid's concerns about UFOs. (Reid passed away in December from pancreatic cancer.) Last year, a lengthy report in The New Yorker revealed that Reid suspected Lockheed Martin, the American aerospace firm, had recovered fragments from a UFO that had crashed in the United States.

"I was told for decades that Lockheed had some of these retrieved materials," he told the magazine at the time. "And I tried to get, as I recall, a classified approval by the Pentagon to have me go look at the stuff. They would not approve that. I don't know what all the numbers were, what kind of classification it was, but they would not give that to me."

Reid also suggested in 2020 that the government knows more about UFOs than has been released to the public, tweeting satisfaction that the Pentagon had allowed the release of footage shot by the U.S. Navy in 2004 and 2015 of "unidentified aerial phenomena." Reid stated that although he was happy with the release of the footage, "it only scratches the surface of research and materials available. The U.S. needs to take a serious, scientific look at this and any potential national security implications. The American people deserve to be informed."

To be clear, scientists have no credible evidence that intelligent extraterrestrial life has landed on Earth or made contact with humans. There are, however, occasional space anomalies observed by astronomers that credible scientists believe may hint at extraterrestrial intelligence, or at least warrant further study. The most prominent of these is the passing interstellar object 'Oumuamua, which came from elsewhere in the galaxy and blazed through the solar system in 2017. Avi Loeb, a Harvard astronomy professor, believes that object had many of the signatures we might associate with intelligent life, and may have been some kind of probe constructed by an extraterrestrial civilization.

Meet the scientist who wants to control the weather

It is easy to forget that clouds — yes, those big, cottonball-resembling things in the sky — are composed of countless tiny particles, so small that they float through the air instead of settling on the ground. These particles are known as aerosols, and for scientists like geoengineering expert Dr. Hannele Korhonen — who has ambitions to control the weather — they are a lifelong passion.

That passion is at the heart of "How to Kill a Cloud," a new documentary that premieres on VICE's The Short List on Thursday, April 7. The title is apt both literally and figuratively: the film chronicles Korhonen's doomed pie-in-the-sky dream — which takes her from her native Finland to the United Arab Emirates (UAE) on a $1.5 million grant — to make it possible for people to create rainclouds in the desert. As Korhonen hobnobs with one-percenters and wrestles with the ethical implications of manipulating the weather, "How to Kill a Cloud" uses a light touch to cut between Korhonen's obviously sincere and brilliant fascination with science and the grubby networking required as she pursues her dream.

The ambition to control the weather sounds like the domain of mad scientists — or at least, of the very rich countries that can afford such luxury technology. Yet such ambitions also inevitably lead to thorny ethical and political drama: in China, for instance, where clouds are routinely "seeded" with silver iodide or liquid nitrogen to stimulate snow or rainfall, intense debates persist over who has access to which cloud's water, and when and where such cloud-seeding is appropriate.

"I think there is a larger question as it pertains to geopolitics as well, in terms of the wealthy countries being in the position of being able to do this and bringing the top scientific minds from around the world in order to do this research and who actually gets to own this research is the UAE in the end," docuseries curator Suroosh Alvi told Salon. "And if they have issues with Iran or Qatar across the Gulf, will they be able to flood them out? I think it is very, very political and gets us into the ethical kind of issues around it."


Yet while it seems frightening for people to be able to control the weather, greenhouse gas emissions have already begun changing the atmosphere through the process known as climate change. In a sense, humanity is already past the point of no return when it comes to altering the weather. The only difference is that, if Korhonen's vision is realized, humans will have the technology to do so deliberately.

"The difference with weather modification is that you make science where you try to find means [to] control it the way you want it — and when you want it," director Tuija Halttunen explained to Salon. "The contradiction is that many of the scientists who I met during film are very concerned about climate change." Many scientists think the technology being developed could be used to help slow down or solve the problems associated with climate change, although "they don't want to do that" necessarily, Halttunen said, because that would merely be a temporary fix: "The final answer is any way to cut down the emissions and not to find means for how to keep up this [wasteful] way of life."

Yet "How to Kill a Cloud" is not all about ethical debates and the future of humanity. There is also plenty for the science buffs out there, as Halttunen and Korhonen capture beautiful shots of clouds and break down how people can understand them not as ephemeral dreams from the sky, but as real-life objects.

"There wouldn't be any clouds if there weren't impurities in the atmosphere and in the air," Halttunen told Salon when asked about the filmmakers' fascination with clouds. "In a sense, the air must be dirty to get clouds, because the water gets on the surface on the particles." Those impurities mean that rain droplets aren't pure liquid water, but carry tiny particles too. "There is always the impurity in the air," Halttunen continued.

Perhaps this is an appropriate metaphor for the dilemma captured in "How to Kill a Cloud" — the purity of Korhonen's fascination with science and stated desire to help humanity, and the imperfections of the tiny particles of capitalism and geopolitics that get mixed in with those ambitions. Within this milieu, the documentary serves as something of an effort to create good — both by shedding light on the sausage-making side of science and by providing viewers with some entertaining and legitimate science to help the medicine go down more smoothly.

"By putting this out there and pushing it as far as we can and as wide as we can, that's one way to help mitigate [misinformation]," Alvi told Salon when asked if the documentary could be an antidote to the bad science circulating online. "I think the director, she felt that the protagonist Hannele did kind of ignore the politics of it all in order to be an ambitious scientist. Maybe that's okay if she is a scientist and she's doing good work, but we don't know if it'll get weaponized down the road or not."

How nuclear weapon safeguards work — or fail

At the nadir of the Watergate scandal in the 1970s, President Richard Nixon hit the bottle hard — to the point that the American security establishment was scared that he might drunkenly cause an international incident. Given that Nixon had control over the American nuclear arsenal, this was, to say the least, a sobering prospect. Yet thankfully for humanity (if not as much for the integrity of American democracy), the military figured out a way to circumvent the elected commander in chief.


"If you go back to the Nixon era, right toward the end during the Watergate period, when Nixon was drinking heavily and had become erratic, the secretary of defense at that time was Jim Schlesinger, an extraordinarily bright man and very principled," David Gergen, a veteran political operative whose career traces back to Nixon's administration, told Salon in 2017. "And he told the joint chiefs, if you get an order from the president to fire a nuclear missile, you do not do that. Don't take an order from the commander in chief until you call me and I give you personal approval, or you get the personal approval of the secretary of state."

The Nixon incident seems particularly relevant in 2022, as Russian President Vladimir Putin's botched invasion of Ukraine is again raising the prospect that one world leader's personality quirks will result in a nuclear war. Yet there are mechanisms in place, figuratively and literally, to protect the human species from the capriciousness of the handful of people who get to control nuclear weapons. Some of those controls are human — people, often military brass, who provide some friction between a politician and the big red button. In other cases, the controls are technological — electrical safeguards designed to stop an accident or a rogue incident.

Bombs are designed to be idiot-proof (though idiots are unpredictable)

According to the nonprofit global security organization The Nuclear Threat Initiative, the way to assess whether a nuclear weapon is "safe" is to determine how likely it is to accidentally "produce yield," which is the scientific term for a nuclear explosion. They also differentiate between an accidental detonation of that nature, and a lesser accident that merely spreads nuclear material around. While the latter event is still terrible (the Chernobyl nuclear incident in 1986, though involving a plant and not a weapon, is a good example of why), it is the former that most frightens both experts and the public — and has prompted a number of key design decisions in the devices themselves.

The gold standard is something known as the Walske Criteria, named after nuclear expert Carl Walske. He determined that every weapon must be designed so that, from a strictly mathematical perspective, there is only a one-in-a-billion chance that it could produce yield in a routine situation (such as while it is sitting in a silo) and only a one-in-a-million chance of it doing so during a freak occurrence like being dropped or an explosion going off near it. Every weapon contains layered components and numerous systems that all have to be activated in a precise order before it can detonate.

One crucial concept, attributable to Los Alamos physicist Harold Agnew, is known as "one-point safety." Agnew recognized that human beings who interact with weapons every day will eventually develop a casual approach toward handling them, making small accidents inevitable. Each weapon would need to be able to withstand the impact of, say, being fumbled onto a tarmac; this principle was eventually broadened to cover being mistakenly dropped from a large height. According to a 1987 report by Los Alamos weapons designers Robert Thorn and Donald Westervelt, "it thus became a major design objective to assure that even when fissile and high-explosive components were fully assembled, there would be no nuclear yield if an accident resulted in detonation of the high explosive. Since such a detonation might start at any single point on or in the explosive components, this design objective came to be known as 'one-point safety.'"

Of course, this does not account for Russia's nuclear arsenal, nor those of other nuclear countries like China, France, India, Israel, North Korea, Pakistan and the United Kingdom. It does not account for how America's nuclear arsenal is aging, and as such is at constant risk of experiencing potentially dangerous mechanical failures due to simple neglect. In addition, it does not consider the situation with Nixon — where the "mistake" is not mechanical or human error, but rather a human bad actor.

It takes a village (of bureaucrats and politicians) to set off a nuclear bomb

In theory, no rational person would intentionally implement a policy that would result in the end of the world — and, by extension, their own death. This hypothesis has informed foreign policy since America dropped nuclear weapons on Japan in 1945 to end World War II. As Dr. Jasen Castillo from the George H.W. Bush School of Government at Texas A&M University told Salon earlier this month, "there are very few cases where people pursue goals other than self-preservation — or in other words, where self-preservation is not the primary goal."

Yet he also acknowledged exceptions such as Adolf Hitler vowing to "fight to the death" near the end of World War II. Fortunately for humanity, Hitler did not have nuclear weapons — but what of the Nixons?

Want more health and science stories in your inbox? Subscribe to Salon's weekly newsletter The Vulgar Scientist.

The unsettling answer is that, to the extent that countries are transparent about their procedures for using nuclear weapons, it seems that the power is concentrated in the hands of the executive. This means that, for Russia, Putin alone has discretion over whether to use those weapons. That power is held in the Cheget, a small briefcase that Putin keeps close to himself at all times. It provides him with complete and undisputed command over Russia's entire strategic nuclear arsenal, and allows Putin to immediately transmit orders to the appropriate military personnel. Putin is also keeping himself physically remote from all but a handful of people, rarely meeting with people face-to-face and when doing so almost always maintaining significant distance. It remains unclear how these details impact the processes for his use of nuclear weapons.

In the United States, however, things are not necessarily much better. During Trump's presidency, Gergen told Salon in 2017, "I've asked people in the Defense Department, 'Do you think there's a similar arrangement today between [Secretary of Defense Jim] Mattis and the four-star generals?' And the answer they've given me back — I don't think there's any reason to believe he's giving such an order ... [is] that if they're given an order that they think comes from an erratic personality, they will double-check it with the secretary before they carry it out." In both the Nixon and Trump situations — as well as those for any other hypothetical presidents who may be unfit to control America's nuclear arsenal — it seems that unelected military officials adopted a "fly by the seat of their pants" approach to contain potentially volatile situations.

Things got hairy a few times

A "Broken Arrow," in this context, does not refer to a damaged street sign, but one of the dozens of occasions since 1950 when there was a dangerous nuclear weapon accident. One of the most infamous is the so-called Damascus Incident, which occurred in 1980 in an Arkansas town of the same name. While performing routine maintenance, an Air Force repairman accidentally dropped a heavy wrench socket, which landed at the bottom of the silo after bouncing off a nuclear missile and striking a pressurized fuel tank. The entire area was evacuated and, more than eight hours later, an explosion killed one person and injured 21 others.

Sometimes the incidents drag in innocent countries. In 1966, a B-52 bomber carrying four hydrogen bombs was flying over Spain as part of America's policy to always have a first strike capability over the Soviet Union in the event of a "hot" war. During a routine refueling, the bomber collided with a KC-135 tanker. The entire unarmed nuclear payload was released; three of the hydrogen bombs landed on the ground while the fourth was dropped into the Mediterranean Sea. (Seven military personnel were also killed in the incident.)

Then there are other types of close calls. Many are false alarms — some official being incorrectly told that a major nuclear retaliation might be warranted based on faulty information usually due to human or technical errors. But sometimes things get even hairier. In 1961, a B-52 bomber experienced a mechanical failure and in the process accidentally dropped and almost detonated a hydrogen bomb in North Carolina. As journalist Eric Schlosser told NPR in 2014, "One of those hydrogen bombs went through all of its proper arming steps except for one, and when it hit the ground in North Carolina, there was a firing signal sent. And if that one switch in the bomb had been switched, it would've detonated a full-scale — an enormous, enormous thermonuclear explosion — in North Carolina."

When Nixon meddled in an overseas war to win an election: Does this sound familiar?

Hubert Humphrey can fairly be described as the Joe Biden of his time, but with one key difference: The Republican candidate who sabotaged U.S. foreign policy and meddled in an overseas conflict in an effort to win his presidential election wound up, well, actually winning it.

I'm talking about Richard Nixon, who tried to scuttle peace negotiations in Vietnam in 1968, the year he was elected president. I probably don't need to tell you who ran against Joe Biden and lost in 2020, despite his meritless protests to the contrary — and despite his grotesque meddling in Ukraine.

But let's get back to Nixon and Humphrey. President Lyndon Johnson originally intended to run again in 1968. (Because he served less than half of John F. Kennedy's original term after the latter's assassination, Johnson was in the unique position of being eligible to serve more than eight years as president.) But the Vietnam War had become so unpopular that Johnson was in serious danger of losing the Democratic nomination, and he dropped out during the primaries. After that, the Democratic contest boiled down to a three-way fight between Humphrey — who, as Johnson's vice president, had largely supported the war — Sen. Eugene McCarthy of Minnesota, an antiwar liberal, and Sen. Robert F. Kennedy of New York, JFK's younger brother.

As you probably know, that was one of the most tumultuous and tragic years in American history. Kennedy would likely have won the nomination if he hadn't been assassinated in Los Angeles in June, just two months after Martin Luther King Jr.'s assassination in Memphis. Humphrey didn't even compete in the primaries, but emerged as the nominee after the infamously violent Democratic convention in Chicago. Alabama Gov. George Wallace ran as a third-party candidate, siphoning off Southern white votes that would otherwise have gone to the Democrats. (Wallace carried five states in the general election; no third-party candidate since then has won any.) In the face of all this turmoil, Humphrey needed massive support from Democrats — even liberals and leftists who didn't much like him — just to keep Nixon from winning in a blowout.

In fact, Humphrey nearly turned things around, and the eventual election result was closer than many expected. If he had actually won, Sept. 30, 1968 might be remembered as a turning point. Up till then, Humphrey lagged far behind Nixon in polls, largely because he was tied to Johnson's unpopular war. It's still not clear whether Humphrey personally agreed with LBJ's Vietnam policy, but he was a loyal soldier who had never expressed doubts in public. That all changed in a televised speech on Sept. 30, when Humphrey promised that if elected he would halt the bombing of North Vietnam and call for an immediate ceasefire. That served to unite most liberals behind him (although certainly not all), especially given Nixon's refusal to disclose any details about his alleged peace plan. Nixon's explanation for this, it must be said, was proto-Trumpian: He argued that unpredictability was a virtue in a president, and he didn't want the North Vietnamese to gain any advantage by making his plans known in advance.

Humphrey saw a major bounce in his poll numbers, and a historic comeback victory suddenly seemed possible. Nixon, one might imagine, was having flashbacks to his controversial loss to JFK in 1960, one of the closest elections in American history.

Why do I seek to compare Humphrey with Joe Biden? Both men had long careers as powerful senators before becoming vice president, and while the circumstances were entirely different, both were overshadowed by charismatic presidents who welcomed the spotlight. Both were viewed with considerable mistrust by liberals and progressives when they ran for president in their own right (although, in fairness, Humphrey was a genuine liberal with a long record of supporting civil rights and the labor movement, whereas Biden was a lifelong moderate with a decidedly mixed political record). Still, both also benefited from those associations: Humphrey was vice president during a period of ambitious social legislation, and Biden leaned heavily into Barack Obama's popularity.

And then there's the fact that Humphrey and Biden both faced unscrupulous political operators and considerable campaign skulduggery. I hardly need to spell out that Biden ran against an incumbent president who tried to coerce a foreign government into launching a phony investigation of Biden and his son, an episode that led all the way to a presidential impeachment and trial in the Senate. Humphrey's situation was similar but different: His opponent also tried to influence overseas events, in that case by sabotaging peace negotiations to damage Humphrey's chances of winning.

Nixon, already a master of political dirty tricks — he wasn't called "Tricky Dick" for nothing — had an ace up his sleeve long before Humphrey delivered his eloquent speech in late September. Through a Chinese-born Republican fundraiser named Anna Chennault, the widow of a prominent World War II general, Nixon had a back channel to the government of South Vietnam, which was effectively a U.S. puppet state. Chennault and others, acting on the Nixon campaign's behalf, urged the South Vietnamese government to boycott peace talks with the Johnson administration. Their argument, before and after Humphrey's big speech, was that Nixon was likely to win the election and would offer the South Vietnamese regime a better deal.

Johnson, as it happened, knew all about this and viewed Nixon's actions as "treason" — but had no obvious way to expose Nixon without revealing that the administration had been spying on a political opponent. The best LBJ could do was to try to give Humphrey a boost — despite rising tension between them — by announcing a halt in the bombing of North Vietnam on Oct. 31, just days before the presidential election.

That was too little, too late: Nixon won the election and all hope of Vietnam peace talks in the near term collapsed. The war continued for several more years, until the U.S. military finally withdrew in 1973 and South Vietnam collapsed in 1975. There is no way to know how many people died because of the thwarted negotiations, but at the time Johnson estimated that South Vietnam was "killing four or five hundred every day waiting on Nixon."

We're not talking about accusations or allegations here. All of that has been extensively documented through papers and tapes, personal accounts and government records. Rumors appeared in the press at the time and the whole affair was an open secret in Washington. But it remains little known today, which can also be said about Donald Trump's threats to withhold $391 million in military aid from Ukraine, already allocated by Congress, unless the Kyiv government announced a spurious criminal investigation into Hunter Biden's business dealings. As we now know all too well, the threat that Russia posed to Ukraine's sovereignty was genuine.

Nixon's unpunished chicanery meant that a misbegotten foreign war dragged on for years, costing thousands of lives. There is no visible direct connection between Trump's attempted extortion of Ukrainian President Volodymyr Zelenskyy and Vladimir Putin's decision to invade Ukraine three years later. But those events are linked, at the very least, through Trump's curious relationship with Putin, whom he continued to praise right up to the day of the invasion. How future historians may view the period leading up to the Ukraine conflict is of course impossible to say. Given the climate of increasing global tension and the rising danger of nuclear war, let's hope we're still here to find out.

Covid-positive deer may be harboring the virus and infecting humans, study says

Aside from saving human lives in the immediate moment, the other fundamental reason that public health officials were pushing mass vaccination to slow the spread of COVID-19 is that the more hosts in which a virus resides, the more likely the virus is to eventually mutate into something more virulent. That has happened at least twice so far with SARS-CoV-2: first with the ultra-contagious delta variant, and then later with the even more contagious omicron variant.


Currently, the number of human hosts in the U.S. is waning as the omicron wave falls from its peak. If we are lucky, that may mean this wave of infections is over; and while the coronavirus will continue to circulate (and mutate) as it becomes endemic, it will have fewer hosts in which to do so.

Or, at least, the number of human hosts. As we know, SARS-CoV-2 seems to have circulated in bats and pangolins before crossing over to humans. We also know that the virus has spread back into animals, presumably from humans: dogs, cats, a zoo lion, and a large population of deer appear to have been infected by humans.

Ominously, the infection trend may now be going the other way. A recent Canadian study raises the possibility that deer — one of the most ubiquitous large mammals in North America — may have infected humans with COVID-19, the disease caused by SARS-CoV-2. That would imply the virus circulated for a while in deer, reproducing and occasionally mutating on its way, before jumping back into people.


The new study provides evidence that deer may have infected humans, although it is not definitively proven. Conducted by more than two dozen scientists across Ontario and posted on the database bioRxiv (it has not yet been peer reviewed), the study included 300 samples from white-tailed deer in Canada during the final months of 2021. Seventeen of those deer tested positive for SARS-CoV-2, all of them from southwestern Ontario. The scientists discovered that this same strain of SARS-CoV-2, which is highly divergent from other known strains, was also highly similar to a SARS-CoV-2 virus that had infected a human. (It was also closely related to a strain found among humans in Michigan in late 2020.) While the scientists cannot confirm that the virus had been transmitted to the human by a deer, they know that the human lived in the same geographic area as deer and had been in close contact with deer during the same time when the infected samples were collected.


That said, the sample size is very small and no one has definitively proved that the deer gave the virus to the human. There is also no evidence that the person with the mutant SARS-CoV-2 virus passed it on to anyone else, and initial experiments suggest the new virus would not be able to evade antibodies. In other words, if it did spread among people, individuals who are vaccinated would likely be safe.

Finally, because the deer-based SARS-CoV-2 virus is such an unknown, there is no reason to believe yet that it presents any kind of increased risk to humans. The bigger concern is that, because viruses can evolve in animals, there is the possibility that it could turn into something more dangerous.

"The virus is evolving in deer and diverging in deer away from what we are clearly seeing evolving in humans," Samira Mubareka, a virologist at Sunnybrook Research Institute and the University of Toronto and an author of the new paper, told The New York Times. After fully sequencing the genomes from five of the infected deer, the scientists discovered many mutations that had not been previously documented. They also found 76 mutations that set the new version of SARS-CoV-2 from the original version of the virus. Some of those mutations had been previously discovered in other infected animals like mink.

Shortly before this study was published, a separate group of scientists announced that Pennsylvania deer may have continued to be infected with the Alpha variant even after it disappeared in humans — and that it evolved within them as they continued to spread it. This further reinforces the concern about deer incubating SARS-CoV-2 viruses.

The SARS-CoV-2 virus is believed to have originated in a horseshoe bat. At some point, the virus is thought to have been transmitted to another animal through one or many "spillover events," and then eventually found its way to a human host. Bats are notorious for serving as hosts to dangerous coronaviruses because their immune systems are unusually aggressive. This means that viruses which live in bats need to evolve and replicate more quickly in order to survive.

"The bottom line is that bats are potentially special when it comes to hosting viruses," Mike Boots, a disease ecologist and UC Berkeley professor of integrative biology, told Science Daily in 2020. "It is not random that a lot of these viruses are coming from bats. Bats are not even that closely related to us, so we would not expect them to host many human viruses. But this work demonstrates how bat immune systems could drive the virulence that overcomes this."

Dogs can get a canine form of dementia — and it is very similar to the human version

If you have ever been close with a dog, the chances are that you have wondered what your canine companion might be thinking. As time goes on and your relationship grows — whether as a primary owner, a family member or an occasional visitor — you will probably ask yourself if the dog remembers you. Just as with our human friends and family, we would like to think that dogs still think about us even when we are not in the room.

Scientists agree dogs are intelligent, emotional and capable of forming lasting relationships with humans. While there is robust debate about the extent to which this is true, animals like Bunny the "talking" sheepadoodle appear able to communicate in such a sophisticated manner that they even seem to discuss their dreams.

The bad news is that, just like humans, dogs can develop degenerative nerve diseases which damage their minds. One human illness in particular, Alzheimer's disease, has a direct analogue in dogs. Sadly, that means your dog could suffer some of the same heartbreaking symptoms, such as forgetting its close family, in its final days.

"Canine Cognitive Dysfunction [CCD] mirrors two key components of Alzheimer's disease in humans," Dr. Silvan Urfer of the Dog Aging Project and the University of Washington told Salon by email. It comes down to a pair of amino acids that will suddenly accumulate in your brain: Amyloid-beta 42 and hyperphosphorylated Tau (pTau). "While there are likely a few differences regarding the details of pTau pathology in particular, it is fair to say that CCD is the dog analog of Alzheimer's Disease," Urfer noted.

Dr. Elizabeth Head, a professor in the Department of Pathology & Laboratory Medicine at the University of California, Irvine, told Salon in writing that in addition to developing beta-amyloid plaques, one of the hallmark features of Alzheimer's disease, dogs also suffer as humans do, in that their neurons die. The synapses, or connections between neurons, are lost as well, a change also observed in aging humans who suffer from cognitive decline.

"From a psychological perspective dogs may show signs of disrupted sleep patterns (e.g. up pacing at night), more vocalizing, [being unable to] remember how to signal to go out and may have trouble recognizing family members," Head explained. "This can lead to more anxiety. From a physical perspective, there may be more episodes of incontinence but oftentimes other physical problems are ruled out with the CCD diagnosis (e.g. deafness, blindness, systemic illness)."

Indeed, the similarities between CCD and human dementia are so striking that researchers believe man's best friend could actually help him find a cure for the debilitating ailment. A nationwide study known as the Dog Aging Project, launched by Cornell University, the University of Washington and the University of Arizona and funded by the National Institute on Aging, exists precisely because scientists are intrigued by those similarities. They believe that learning more about how to help dogs with the condition can, in the process, provide research data that helps fight human diseases related to senescence.

"What we're trying to do is find a better understanding of the disease in dogs and translate those findings to humans," Dr. Marta Castelhano, director of the Cornell Veterinary Biobank and one of the involved scientists, told Cornell News at the time.

Until a cure for CCD exists, the sad reality is that dogs and humans alike who experience cognitive decline will be left to manage their symptoms to the best of their ability. When speaking with Salon, Urfer stressed that he is "not providing veterinary advice on individual dogs, as there is no vet-patient-client relationship here." People who are concerned about their dogs should consult a veterinarian. What we do know, however, is that no causal treatments exist for CCD, and that certain physical characteristics make dogs more or less likely to be at risk.

"We know that bigger dogs have a lower risk of developing CCD than small dogs, and there is also some evidence that intact males have a lower CCD risk than neutered males, and that existing CCD progresses faster in neutered than in intact males," Urfer explained. "This is interesting in that it also mirrors findings from human medicine that taller people are less likely to get Alzheimer's disease, and that men who undergo anti-androgen treatment for prostate cancer have an increased Alzheimer's disease risk."

If your dog is healthy now, then the best thing to do is help them stay healthy, which may keep CCD from developing. It is the same approach that works for Homo sapiens.

"The best approach is always prevention – ensure good physical health (e.g. keep up with dentals), exercise, lots of social and cognitive enrichment, and a good diet, manage co-occuring conditions (e.g. obesity) – just like for people!" Head told Salon.

This is what would happen to Earth if a nuclear war broke out between the West and Russia

Suddenly, the threat of nuclear war feels closer than it has in decades. The Bulletin of the Atomic Scientists has set its Doomsday Clock to 100 seconds to midnight, and President Joe Biden has issued increasingly ominous statements about how the looming conflict over Ukraine could ensnare both Russia and the West in conventional war.

And, some fear, war with nuclear weapons. It is a prospect that has haunted human beings since the dawn of the Cold War. Politicians who were perceived as too open to the idea of nuclear war would pay for their hawkishness at the polls. Motion pictures from "Dr. Strangelove" to "The Day After" have depicted an uninhabitable world, filled with lethal amounts of radiation and short on necessities like food and water. As our electrical infrastructure collapsed around us, people would resort to looting and other violent methods to survive. The seeming deterioration of civilization during the early months of the COVID-19 pandemic would be nothing compared to the anarchy and destruction that would follow nuclear war.

Yet decades of living with nuclear weapons have produced a broad body of knowledge as to what a nuclear war might do to the planet, and to humanity. If even a "small" nuclear war were to break out, tens of millions of people would die after the initial blasts. A blanket of soot would block the rays of the Sun and cause a nuclear winter, destroying crops all over the planet and plunging billions into famine. In the northern hemisphere, there would be such severe ozone depletion from the nuclear smoke that organisms would suffer from increased exposure to damaging ultraviolet light. While things would not be as bad in the southern hemisphere, even well-positioned countries like Australia would face the ripple effects from a small nuclear war in the northern hemisphere by sheer virtue of their interconnectedness with the global community.

RELATED: Fine-tuning the doomsday machines: Understanding the nuclear-missile dispute

"The worst-case scenario is that US and Russian central strategic forces would be launched with the detonation of several thousand warheads," Hans M. Kristensen, Director, Nuclear Information Project and Associate Senior Fellow to SIPRI, Federation of American Scientists, told Salon by email. "A large nuclear exchange would not only kill millions of people and contaminate wast areas with radioactive fallout but potentially also have longer-term climatic effects."

Yet Kristensen said he does not believe the current Ukraine conflict is likely to become a nuclear war. He is not the only nuclear weapons expert who feels that way.

"First, there is little chance of that happening barring some massive miscalculation, accident or escalation of any conflict there," Geoff Wilson, the political director of Council for a Livable World, a non-profit dedicated to eliminating nuclear weapons from America's arsenal, told Salon by email. Ukraine is not part of the North Atlantic Treaty Organization (NATO) and, as such, the United States has not committed to use its military if Ukraine's sovereignty is encroached. While American policymakers can provide material aid and punish Russia through sanctions, it is unlikely that they will risk open warfare.

That said, the world's nuclear powers (which, in addition to the United States and Russia, include China, India, Israel, France, North Korea, Pakistan and the United Kingdom) still have vast arsenals at their disposal. In addition, former President Donald Trump oversaw the development of new weapons like the W76-2 low-yield nuclear warhead. As such, the possibility of nuclear war always remains: not likely in this scenario, perhaps, but never entirely out of the question.

"The fact that the United States has started to develop these weapons again is crazy, and it sends a very poor message to the rest of the world when we have been pushing nations to end nuclear proliferation and reduce the size and scope of nuclear arsenals for so long," Wilson explained. "What's more, it sends a dangerous signal to our adversaries that we think that tactical nuclear weapons are important again, and will likely signal to them that they should follow suit."

Like Kristensen, Wilson made it clear that if a war with nuclear weapons ever did break out, it would end disastrously.

"Researchers have estimated that a 'regional nuclear war,' say, a couple hundred low-yield weapons exchanged between India and Pakistan, could lead to the deaths of billions people worldwide, due to the effects on global food production," Wilson explained. "So, yeah, it would not be good."

Since the United States dropped an atomic bomb on Hiroshima in 1945, intellectuals from a number of disciplines have advocated for world government as an alternative to a possible nuclear holocaust. Andreas Bummel, co-founder and director of the international campaign for a United Nations Parliamentary Assembly and of Democracy Without Borders, has made that argument as well, telling Salon that there are no national policies which can entirely eliminate the threat.

"The only way is institutional and structural by creating a workable international system of collective security which is not only based on total elimination of WMD but also radical conventional disarmament, setting up UN capacities for rapid intervention and democratic decision-making bodies and procedures," Bummel explained by email. He added that it is "doubtful" whether this can happen in a meaningful way "while major nuclear powers are autocratic and one-party dictatorships."

Kristensen offered some less sweeping alternatives.

"Arms control agreements to reduce the numbers and role of nuclear weapons," Kristensen told Salon. "Crisis management agreements to reduce the chances for and risks of misunderstandings and overreactions. And changes in national policies so countries refrain from taking aggressive action. All of this requires political will to change."