Ramadan: 6 questions answered

1. Why is Ramadan called Ramadan?

Ramadan is the ninth month of the Islamic lunar calendar, and lasts either 29 or 30 days, depending on when the new crescent moon is, or should be, visible.

The Arabic term Ramadan connotes intense heat. It seems that in pre-Islamic Arabia, Ramadan was the name of a scorching hot summer month. In the Islamic calendar, however, the timing of Ramadan varies from year to year. This year Ramadan begins in most places on April 13. An Islamic year is roughly 11 days shorter than a Gregorian year.
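The roughly 11-day annual shift described above can be sketched in a few lines of Python. This is a rough illustration only, not an authoritative calendar calculation: the 2021 start date (April 13) is taken from the article, and the projected dates are approximations, since actual start dates depend on sightings of the new crescent moon.

```python
# Approximate how Ramadan's Gregorian start date drifts earlier each
# year, given that an Islamic (lunar) year runs about 11 days shorter
# than a Gregorian (solar) year. Dates after 2021 are estimates only.
from datetime import date, timedelta

DRIFT_DAYS = 11                  # approximate annual shift
start_2021 = date(2021, 4, 13)   # start date cited in the article

for i in range(4):
    approx = start_2021 - timedelta(days=DRIFT_DAYS * i)
    print(f"{2021 + i}: around {approx:%B %d}")
```

Run over four years, the sketch shows Ramadan sliding from mid-April back toward early March, which is why the month cycles through all four seasons over roughly 33 Gregorian years.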

2. What is the significance of Ramadan?

Ramadan is a period of fasting and spiritual growth, and is one of the five “pillars of Islam,” the others being the declaration of faith, daily prayer, alms-giving, and the pilgrimage to Mecca. Able-bodied Muslims are expected to abstain from eating, drinking and sexual relations from dawn to sunset each day of the month. Many practicing Muslims also perform additional prayers, especially at night, and attempt to recite the entire Qur'an. The prevailing belief among Muslims is that it was in the final 10 nights of Ramadan that the Qur'an was first revealed to the Prophet Muhammad.

3. What is the connection between soul and body that the observance of Ramadan seeks to explain?

The Qur'an states that fasting was prescribed for believers so that they may be conscious of God. By abstaining from things that people tend to take for granted (such as water), it is believed, one may be moved to reflect on the purpose of life and grow closer to the creator and sustainer of all existence. As such, engaging in wrongdoing effectively undermines the fast. Many Muslims also maintain that fasting lets them experience something of what it is to live in poverty, and that this may foster empathy.

4. Can Muslims skip fasting under certain conditions? If so, do they make up missed days?

All those who are physically limited (for example, because of an illness or old age) are exempt from the obligation to fast; the same is true for anyone who is traveling. Those who are able to do so are expected to make up the missed days at a later time. One could potentially make up all of the missed days in the month immediately following Ramadan, the month of Shawwal. Those unable to fast at all (if they are financially able) are expected to provide meals to the needy as an alternative course of action.

5. What is the significance of 29 or 30 days of fasting?

By fasting over an extended period of time, practicing Muslims aim to foster certain attitudes and values that they would be able to cultivate over the course of an entire year. Ramadan is often likened to a spiritual training camp.

Besides experiencing feelings of hunger and thirst, believers often have to deal with fatigue because of late-night prayers and predawn meals. This is especially true during the final 10 nights of the month. In addition to being the period in which the Qur'an was believed to have been first revealed, this is a time when divine rewards are believed to be multiplied. Many Muslims will offer additional prayers during this period.

6. Do Muslims celebrate the completion of Ramadan?

The end of Ramadan marks the beginning of one of two major Islamic holidays: Eid al-Fitr, the “festival of the breaking of the fast.” On this day, many Muslims attend a religious service, visit relatives and friends, and exchange gifts.

This is an updated version of an article originally published on May 22, 2017.

Mohammad Hassan Khalil, Professor of Religious Studies and Director of the Muslim Studies Program, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The problem with 'deprogramming' QAnon followers

Recent calls to deprogram QAnon conspiracy followers are steeped in discredited notions about brainwashing. As popularly imagined, brainwashing is a coercive procedure that programs new long-term personality changes. Deprogramming, also coercive, is thought to undo brainwashing.

As a professor of religious studies who has written and taught about alternative religious movements, I believe such deprogramming conversations do little to help us understand why people adopt QAnon beliefs. A deprogramming discourse fails to understand religious recruitment and conversion and excuses those spreading QAnon beliefs from accountability.

A brief brainwashing history

Deprogramming, a method thought to reverse extreme psychological manipulation, can't be understood apart from the concept of brainwashing.

The modern concept of brainwashing has its origin in Chinese experiments with American prisoners of war during the Korean War. Coercive physical and psychological methods were employed in an attempt to plant Communist beliefs in the minds of American POWs. To determine whether brainwashing was possible, the CIA then launched its own secret mind-control program in the 1950s called MK-ULTRA.

By the late 1950s researchers were already casting doubt on brainwashing theory. The anti-American behavior of captured Americans was best explained by temporary compliance owing to torture. This is akin to false confessions made under extreme duress.

Still, books like “The Manchurian Candidate,” released in 1959, and “A Clockwork Orange,” released in 1962 – both of which were turned into movies and heavily featured themes of brainwashing – reinforced the concept in popular culture. To this day, the language of brainwashing and deprogramming is applied to groups holding controversial beliefs – from fundamentalist Mormons to passionate Trump supporters.

In the 1970s and 1980s, brainwashing was used to explain why people would join new religious movements like Jim Jones' Peoples Temple or the Unification Church.

Seeking guardianship of adult children in these groups, parents cited the belief that members were brainwashed to justify court-ordered conservatorship. With guardianship orders in hand, they sought help from cult deprogrammers like Ted Patrick. Deprogrammers were notorious for kidnapping, isolating and harassing adults in an effort to reverse perceived cult brainwashing.

For a time, U.S. courts accepted brainwashing testimony despite the pseudo-scientific nature of the theory. It turns out that research on coercive conversion failed to support brainwashing theory. Several professional organizations, including the American Psychological Association, have filed legal briefs against brainwashing testimony. Others argued that deprogramming practices violated civil rights.

In 1995 the coercive deprogramming method was litigated again in Scott v. Ross. The jury awarded the plaintiff nearly US$5 million in total damages. This bankrupted the co-defending Cult Awareness Network, a popular resource at the time for those seeking deprogramming services.

'Exit counseling'

Given this tarnished history, coercive deprogramming evolved into “exit counseling.” Unlike deprogramming, exit counseling is voluntary and resembles an intervention or talk therapy.

One of the most visible self-styled exit counselors is former deprogrammer Rick Alan Ross, the executive director of the Cult Education Institute and defendant in Scott v. Ross. Through frequent media appearances, people including Ross and Steve Hassan, founder of the Freedom of Mind Resource Center, continue to contribute to the mind-control and deprogramming discourse in popular culture.

These “cult-recovery experts,” some of whom were involved with the old deprogramming model, are now being consulted for QAnon deprogramming advice. Some, like Ross and cult intervention specialist Pat Ryan, advocate a more aggressive intervention approach. Others, like Hassan, offer a gentler approach that includes active listening.

Choice vs. coercion

Despite the pivot to exit counseling, the language of deprogramming persists. The concept of deprogramming rests on the idea that people do not choose alternative beliefs. Instead, beliefs that are deemed too deviant for mainstream culture are thought to result from coercive manipulation by nefarious entities like cult leaders. When people call for QAnon believers to be deprogrammed, they are implicitly denying that followers exercised choice in accepting QAnon beliefs.

This denies the personal agency and free will of those who became QAnon enthusiasts, and shifts the focus to the programmer. It can also relieve followers of responsibility for perpetuating QAnon beliefs.

As I suggested in an earlier article, and as evident in the QAnon influence on the Jan. 6, 2021, Capitol insurrection, QAnon beliefs can be dangerous. I believe those who adopt and perpetuate these beliefs ought to be held responsible for the consequences.

This isn't to say that people are not subject to social influence. However, social influence is a far cry from the systematic, mind-swiping, coercive, robotic imagery conjured up by brainwashing.

Admittedly, what we choose to believe is constrained by the types of influences we face. Those restraints emerge from our social and economic circumstances. In the age of social media, we are also constrained by algorithms that influence the media we consume. Further examination of these issues in relation to the development of QAnon would prove fruitful.

But applying a brainwashing and deprogramming discourse limits our potential to understand the grievances of the QAnon community. To suggest “they were temporarily out of their minds” relieves followers of the conspiracy of responsibility and shelters the rest of society from grappling with uncomfortable social realities.

To understand the QAnon phenomenon, I believe analysts must dig deeply into the social, economic and political factors that influence the adoption of QAnon beliefs.


Paul Thomas, Chair and Professor of Religious Studies, Radford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The discovery of the lost city of 'the Dazzling Aten' will offer vital clues about domestic and urban life in Ancient Egypt

An almost 3,400-year-old industrial, royal metropolis, “the Dazzling Aten”, has been found on the west bank of the Nile near the modern-day city of Luxor.

Announced last week by the famed Egyptian archaeologist Dr Zahi Hawass, the find has been compared in importance to the discovery of Tutankhamen's tomb almost a century earlier.

Built by Amenhotep III and then used by his grandson Tutankhamen, the ruins of the city were an accidental discovery. In September last year, Hawass and his team were searching for a mortuary temple of Tutankhamen.

Instead, hidden under the sands for almost three and a half millennia, they found the Dazzling Aten, believed to be the largest city discovered in Egypt and, importantly, dated to the height of Egyptian civilisation. So far, Hawass' excavations have unearthed rooms filled with tools and objects of daily life such as pottery and jewellery, a large bakery, kitchens and a cemetery.

The city also includes workshops and industrial, administrative and residential areas, as well as, to date, three palaces.

Ancient Egypt has been called the “civilisation without cities”. What we know about it comes mostly from tombs and temples, whilst other great civilisations of the Bronze Age, such as Mesopotamia, are famous for their great cities.

The Dazzling Aten is extraordinary not only for its size and level of prosperity but also for its excellent state of preservation, leading many to call it the “Pompeii of Ancient Egypt”.

The rule of Amenhotep III was one of the wealthiest periods in Egyptian history. This city will be of immeasurable importance to the scholarship of archaeologists and Egyptologists, who for centuries have struggled with understanding the specifics of urban, domestic life in the Pharaonic period.

Foundations of urban life

I teach a university subject on the foundations of urban life, and it always comes as a surprise to my students how little we know about urbanism in ancient Egypt.

The first great cities, and with them the first great civilisations, emerged along the fertile valleys of great rivers in Mesopotamia (modern-day Iraq), the Indus Valley (modern-day India and Pakistan) and China at the beginning of the Bronze Age, at least 5,000 years ago.

Just like cities today, they provided public infrastructure and roads, and often access to sanitation, education, health care and welfare. Their residents specialised in particular professions, paid taxes and had to obey laws.

But the Nile did not support the urban lifestyle in the same way as the rivers of other great civilisations did. Its flood pattern was reliable, so the world's second-longest river could be easily tamed, allowing for simple methods of irrigation that did not require complex engineering or large groups of workers to maintain. This meant the population didn't necessarily need to cluster in organised cities.

An etching of the Nile flooding by French artist Jacques Callot (1592 - 1635). National Gallery of Art

Excavations of Early Dynastic (c. 3150-2680 BCE) Egyptian cities such as Nagada and Hierakonpolis have provided us with a plethora of information regarding urban life in the early Bronze Age. But they are separated from the Dazzling Aten by some 1,600 years – as long a span as separates us from Attila's Huns attacking ancient Rome.

One city closer in age to the Dazzling Aten that we do know a little more about is the short-lived capital of Amenhotep III's son, Akhenaten, known as the “Horizon of the Aten”, or Tell el-Amarna. Amarna was functional for only 14 years (1346-1332 BCE) before being abandoned forever. It was first described by a travelling Jesuit monk in 1714 and has been excavated on and off for the last 100 years.

Very few other Egyptian cities from the Early Dynastic Period (3150 BCE) to the Hellenistic period (following Alexander the Great's conquest of Egypt in 332 BCE) have been excavated. This means that domestic urban life and urban planning have long been contentious research areas in the study of Pharaonic Egypt.

The scientific community is impatiently waiting for more information to draw comparisons between Akhenaten's city and the newly discovered capital founded by his father.

The magnificent pharaoh

Amenhotep III, also known as Amenhotep the Magnificent, ruled between 1386 and 1349 BCE and was one of the most prosperous rulers in Egyptian history.

During his reign as the ninth pharaoh of the 18th Dynasty, Egypt achieved the height of its international power, climbing to an unprecedented level of economic prosperity and artistic splendour. His vision of greatness was immortalised in his great capital, which is believed to have been later used by at least Tutankhamen and Ay.

In 2008, for the first time in history, the majority of the world's inhabitants lived in cities. Yet, with globalisation, the differences in the “liveability” of modern cities are striking.

As a society we need to understand where cities come from, how they formed and how they shaped the development of past urban communities, in order to learn lessons for the future. We look forward to research and findings being published from the ancient city of Amenhotep III to enlighten us about the daily lives of ancient Egyptians at their height.

Anna M. Kotarba-Morley, Lecturer, Archaeology, Flinders University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Trump, defying custom, hasn’t given the National Archives records of his speeches at political rallies

Public figures live on within the words they are remembered by. To understand the effect they had on history, their words need to be documented. No one is absolutely sure of exactly what Abraham Lincoln said in his most famous speech, the Gettysburg Address. Five known manuscripts exist, but all of them are slightly different. Every newspaper story from the day contains a different account.

In the case of modern presidents, for the official record, we rely upon transcriptions of all their speeches collected by the national government.

But in the case of Donald Trump, that historical record is likely to have a big gap. Almost 10% of the president's total public speeches are excluded from the official record. And that means a false picture of the Trump presidency is being created in the official record for posterity.

President George W. Bush campaigning in Knoxville, Tennessee, on October 8, 2002.

In speeches President George W. Bush would give when stumping for GOP candidates, he made the same joke 50 different times, apologizing to audiences that they drew the 'short straw' and got him instead of Laura Bush.

Paul J. Richards/AFP/Getty Images

Saving the records

In 1957, the National Historical Publications Commission, a part of the National Archives that works to “preserve, publish, and encourage the use of documentary sources … relating to the history of the United States,” recommended developing a uniform system so all materials from presidencies could be archived. They did this to literally save presidential records from the flames: President Warren G. Harding's wife claimed to have burned all his records, and Robert Todd Lincoln burned all his father's war correspondence. Other presidents, such as Chester A. Arthur and Martin Van Buren, have had their records intentionally destroyed.

So the government collects and retains all presidential communications, including executive orders, announcements, nominations, statements and speeches. This includes any public verbal communications by presidents, which are also placed as public documents in the Compilation of Presidential Documents.

These are part of the official record of any administration, published weekly by the Office of the Federal Register of the National Archives and Records Administration from materials released by the White House press secretary. In most presidencies, the document or transcript is available a few days to a couple of weeks after any event. At the conclusion of an administration, these documents form the basis for the formal collections of the Public Papers of the President.

As a political scientist, I'm interested in where presidents give speeches. What can be learned about their priorities based on their choice of location? What do these patterns tell us about administrations?

For example, Barack Obama primarily focused on large media markets in states that strongly supported him. Trump went to supportive places as well, including small media markets like Mankato, Minnesota, where the airport was not even large enough to fly into with the regular Air Force One.

Presidential speeches often give a very different perception of an administration. Without all the pageantry, you can quickly get to the point of the visit in the text.

In speeches that President George W. Bush gave in the 2002 midterm election period, he made the same joke more than 50 times as his icebreaker. He would apologize that audiences had drawn the “short straw” and gotten him instead of Laura. His commitment to that joke gave a glimpse of his desire to connect with an audience through self-deprecating humor.

I found something odd when I began to pull items from the compilation and organize my own database of locations for the Donald Trump administration. I was born and raised in Louisville, Kentucky, and I pay attention to my home state. I knew that on March 20, 2017, Donald Trump held a public rally in Louisville, where in a meandering speech he touched on everything from Kentucky coal miners to the Supreme Court, China, building a border wall and “illegal immigrants” who were, he said, robbing and murdering Americans.

But when I looked in the compilation in mid-2017, I couldn't find the Louisville speech. No problem, I thought. They are just running behind and they will put it in later.

A year later, I noticed the Louisville speech was still not there. Furthermore, other speeches were missing. These were not just any speeches; they were Trump's rallies. By my count, 147 separate transcripts for public speaking events are missing from Trump's official presidential speech records. That's just over 8% of his presidential speeches.
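The total number of Trump's presidential speeches is never stated outright, but the two figures above let us infer it. This short Python sketch is a back-of-the-envelope check only; the implied total is my inference from the article's numbers, not a count from the records themselves.

```python
# Infer the approximate total speech count from the article's figures:
# 147 missing transcripts described as "just over 8%" of all speeches.
missing = 147
share = 0.08                      # "just over 8%"
implied_total = missing / share   # about 1,837 speeches

print(f"implied total: about {implied_total:,.0f} speeches")
```

Since the stated share is "just over" 8%, the true total would be slightly below this figure, i.e. somewhere around 1,800 speeches.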

A portrait of President Chester A. Arthur, with long gray whiskers.

President Chester A. Arthur, whose family burned many of his presidential records. It was not uncommon for presidents' families to do this.

Ole Peter Hansen Balling, artist; National Portrait Gallery, Smithsonian Institution

What's in, what's out

The Presidential Records Act, first passed in 1978, says administrations have to retain “any documentary materials relating to the political activities of the President or members of the President's staff, but only if such activities relate to or have a direct effect upon the carrying out of constitutional, statutory, or other official or ceremonial duties of the President.”

An administration is allowed to exclude personal records that are purely private or don't have an effect on the duties of a president. All public events are included, such as quick comments on the South Lawn, short exchanges with reporters and all public speeches, radio addresses and even public telephone calls to astronauts on the space shuttles.

But Trump's large public rallies, and what he said at them, have so far been omitted from the public record his administration supplied to the Compilation of Presidential Documents. And while historians and the public could get transcripts off of publicly available videos, that still does not address the need to have a complete official collection of these statements.

Federal law says that presidents are allowed to exclude “materials directly relating to the election of a particular individual or individuals to Federal, State, or local office, which have no relation to or direct effect upon the carrying out of … duties of the President.”

The law has been interpreted to mean an administration could omit notes, emails or other documentation from what it sends to the compilation. While many presidents do not provide transcripts for speeches at private party fundraising events, rallies covered by America's press corps likely do not fall under these exclusions.

Why does it matter?

Government documents are among the primary records of who we are as a people.

These primary records speak to Americans directly; they are not what others tell us or interpret to us about our history. The government compiles and preserves these records to give an accurate accounting of the leaders the country has chosen. They provide a shared history in full instead of an excerpt or quick clip shown in a news report.


Since 1981, the public has legally owned all presidential records. As soon as a president leaves office, the National Archivist gets legal custody of all of them. Presidents are generally on their honor to be good stewards of history. There is no real penalty for noncompliance.

But these public documents, which I work with constantly, have so far always been available to the public – and they've been available quickly. Internal presidential documents like memos or email have a rigorous archival procedure that lasts years before they are even accessible. I have a record of every presidential speech from 1945 to 2021 – every president since Clinton has all their public speeches available online. Until President Trump, there have been no missing public speeches in the permanent collection. By removing these speeches, Trump is creating a false perception of his presidency, making it look more serious and traditional.

And by the way: That 2017 Louisville speech is still missing from the records in 2021.

Shannon Bow O'Brien, Assistant Professor of Instruction, The University of Texas at Austin College of Liberal Arts

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Long live the monarchy! British royals tend to survive a full three decades longer than their subjects

In the U.K. it is customary to receive a personalized message from the queen on your 100th birthday – such is the relative rarity of reaching the milestone.

Prince Philip was just a couple of months shy of that milestone, dying at the age of 99 years and 10 months on April 9, 2021. The last notable royal death before his was that of the queen mother in 2002. She was 101 years old.

Reaching such a ripe old age isn't uncommon among the British ruling family – in fact, my analysis shows that on average they live an additional 30 years compared with their subjects.

I looked at the duration of life of the last six British monarchs, along with the longevity of their spouses and children – in total 27 royals. What it reveals is a fascinating and familiar story for those of us who study aging and longevity for a living. As a professor of epidemiology and biostatistics, I had previously observed the exact same phenomenon among U.S. presidents – they also tend to live decades longer than the general population they serve.

An age-old story

The ruling U.K. monarchs from Queen Victoria onward lived an average of 75 years. And this longevity will continue to rise with each day that Queen Elizabeth II – currently age 95 – lives. Their spouses survived even longer, reaching an average age of 83.5 years. If Victoria's husband Prince Albert, who died of suspected typhoid fever at age 42 in 1861, is removed from the equation, the average duration of the life of the spouses of the monarchs was an astonishing 91.7 years.

By contrast, the average life duration of the wider U.K. population for the years the monarchs were born throughout this period was only 46 years, according to figures from the Human Mortality Database. For example, the typical life expectancy at birth for a female in the U.K. in 1819 was just under 41 years. Queen Victoria, also born in 1819, was 81 when she died. By the time Elizabeth II was born in 1926, life expectancy at birth for females in the U.K. had risen to 62 – the queen has already surpassed that by some 33 years.
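The gap can be illustrated with the article's own numbers. This small Python sketch simply restates the figures quoted above (no new data), showing how far each royal's actual lifespan exceeded the life expectancy at birth of her birth cohort.

```python
# Compare royal lifespans with cohort life expectancy at birth,
# using only the figures quoted in the article.
figures = {
    "Queen Victoria (b. 1819)": {"lived": 81, "cohort_expectancy": 41},
    "Elizabeth II (b. 1926)": {"lived": 95, "cohort_expectancy": 62},
}

for name, f in figures.items():
    surplus = f["lived"] - f["cohort_expectancy"]
    print(f"{name}: {surplus} years beyond cohort life expectancy")
```

Victoria's surplus of about 40 years and Elizabeth II's of some 33 (and counting) are the kind of doubled lifespans the next paragraph describes.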

Such differences in lifespan – with some members of the royal family living to an age double that expected of the general population – are considered in aging circles to be extremely large, but not uncommon.

Lifespan differences of this magnitude are the result of a combination of genetic as well as social and behavioral influences.

No one can live long without first having won the genetic lottery at birth. To maximize the chances of achieving exceptional longevity – upward of 85 years old – you must begin by being lucky enough to have long-lived parents. But even for those blessed with the gift at birth of the potential for a long life, this is no guarantee you'll outlive your contemporaries.

The next challenge is to avoid behaviors that shorten life. That list is long – it is a lot easier to shorten life than extend it – but among the most well known are smoking, eating in excess and lack of exercise.

And then there is the influence of poverty and privilege. Being born into or living in poverty has been shown to be one of the most important factors that shortens lifespan – and it is here that perhaps the royals have the greatest advantage.

Further evidence of privilege being a crucial ingredient in the recipe for exceptional longevity can be seen in the children of the last six U.K. monarchs: those who died from natural causes lived an average of 69.7 years. This is some 23 years longer than the average lifespan of British subjects over that period.

A privileged existence

Put simply, British monarchs and their families live so much longer than their subjects for the same reason other subgroups of the population across the globe live longer than contemporaries born in the same year: privilege over poverty. A famous study conducted in Manchester, England, in 2017 demonstrated vast differences in life expectancy depending on the conditions of where people lived. Access to higher education and economic status was directly correlated with longer life, while lower education, income and poverty were linked to shorter lives.

In the U.S., similar studies of life expectancy by county, census tract and zip code demonstrated the same phenomenon. In fact, there are multiple instances of dramatic differences in longevity among people living as close as across the street from each other – caused by differences in poverty and privilege.

Differences in duration of life are first defined by genetics, but they are then heavily mediated by education, income, health care, clean water, food, indoor living and working environments, and the overall effects of high or low socioeconomic status.

The long life of Prince Philip is a cause for celebrating the progress of medical science in being able to keep people alive for longer. But it is in part the result of a privilege denied to many and a reminder that humanity has a long way to go to equalize the chances of living a long life.


S. Jay Olshansky, Professor of Epidemiology and Biostatistics, University of Illinois at Chicago

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Northern Ireland, born of strife 100 years ago, again erupts in political violence

Sectarian rioting has returned to the streets of Northern Ireland, just weeks shy of its 100th anniversary as a territory of the United Kingdom.

For several nights, young protesters loyal to British rule – fueled by anger over Brexit, policing and a sense of alienation from the U.K. – set fires across Belfast, the capital, and clashed with police. Scores have been injured.

U.K. Prime Minister Boris Johnson, calling for calm, said “the way to resolve differences is through dialogue, not violence or criminality.”

But Northern Ireland was born of violence.

Deep divisions between two identity groups – broadly defined as Protestant and Catholic – have dominated the country since its very founding. Now, roiled anew by the impact of Brexit, Northern Ireland is seemingly moving in a darker and more dangerous direction.

Colonization of Ireland

The island of Ireland, whose northernmost part lies a mere 13 miles from Britain, has been contested territory for at least nine centuries.

Britain long gazed with colonial ambitions on its smaller Catholic neighbor. The 12th-century Anglo-Norman invasion first brought the neighboring English to Ireland.

In the late 16th century, frustrated by continuing native Irish resistance, Protestant England implemented an aggressive plan to fully colonize Ireland and stamp out Irish Catholicism. Known as “plantations,” this social engineering exercise “planted” strategic areas of Ireland with tens of thousands of English and Scottish Protestants.

Plantations offered settlers cheap woodland and bountiful fisheries. In exchange, Britain established a base loyal to the British crown – not to the Pope.

England's most ambitious plantation strategy was carried out in Ulster, the northernmost of Ireland's provinces. By 1630, according to the Ulster Historical Foundation, there were about 40,000 English-speaking Protestant settlers in Ulster.

Though displaced, the native Irish Catholic population of Ulster was not converted to Protestantism. Instead, two divided and antagonistic communities – each with its own culture, language, political allegiances, religious beliefs and economic histories – shared one region.

Whose Ireland is it?

Over the next two centuries, Ulster's identity divide transformed into a political fight over the future of Ireland.

“Unionists” – most often Protestant – wanted Ireland to remain part of the United Kingdom. “Nationalists” – most often Catholic – wanted self-government for Ireland.

These fights played out in political debates, the media, sports, pubs – and, often, in street violence.

British soldiers suppress a riot in Belfast in 1886.

By the early 1900s, a movement of Irish independence was rising in the south of Ireland. The nationwide struggle over Irish identity only intensified the strife in Ulster.

The British government, hoping to appease nationalists in the south while protecting the interests of Ulster unionists in the north, proposed in 1920 to partition Ireland into two parts: one majority Catholic, the other Protestant-dominated – but both remaining within the United Kingdom.

Irish nationalists in the south rejected that idea and carried on with their armed campaign to separate from Britain. Eventually, in 1922, they gained independence and became the Irish Free State, today called the Republic of Ireland.

In Ulster, unionist power-holders reluctantly accepted partition as the surest means of remaining part of Britain. In 1920, the Government of Ireland Act created Northern Ireland, the newest member of the United Kingdom.

A troubled history

In this new country, native Irish Catholics were now a minority, making up less than a third of Northern Ireland's 1.2 million people.

Stung by partition, nationalists refused to recognize the British state. Catholic schoolteachers, supported by church leaders, refused to take state salaries.

And when Northern Ireland seated its first parliament in May 1921, nationalist politicians did not take their elected seats in the assembly. The Parliament of Northern Ireland became, essentially, Protestant – and its pro-British leaders pursued a wide variety of anti-Catholic practices, discriminating against Catholics in public housing, voting rights and hiring.

By the 1960s, Catholic nationalists in Northern Ireland were mobilizing to demand more equitable governance. In 1968, police responded violently to a peaceful march to protest inequality in the allocation of public housing in Derry, Northern Ireland's second-largest city. In 60 seconds of unforgettable television footage, the world saw water cannons and baton-wielding officers attack defenseless marchers without restraint.

On Jan. 30, 1972, during another civil rights march in Derry, British soldiers opened fire on unarmed marchers, killing 14. This massacre, known as Bloody Sunday, marked a tipping point. A nonviolent movement for a more inclusive government morphed into a revolutionary campaign to overthrow that government and unify Ireland.

The Irish Republican Army, a nationalist paramilitary group, used bombs, targeted assassinations and ambushes to pursue independence from Britain and reunification with Ireland.

The city of Derry effectively became a war zone at times in 1969.

Longstanding paramilitary groups that were aligned with pro-U.K. political forces reacted in kind. Known as loyalists, these groups colluded with state security forces to defend Northern Ireland's union with Britain.

Euphemistically known as “the troubles,” this violence claimed 3,532 lives from 1968 to 1998.

Brexit hits hard

The troubles subsided in April 1998 when the British and Irish governments, along with major political parties in Northern Ireland, signed a landmark U.S.-brokered peace accord. The Good Friday Agreement established a power-sharing arrangement between the two sides and gave the Northern Irish parliament more authority over domestic affairs.

The peace agreement made history. But Northern Ireland remained deeply fragmented by identity politics and paralyzed by dysfunctional governance, according to my research on risk and resilience in the country.

Violence has periodically flared up since.

Protesters and police face off in Belfast on April 8, 2021.

Then, in 2020, came Brexit. Britain's negotiated withdrawal from the European Union created a new border in the Irish Sea that economically moved Northern Ireland away from Britain and toward Ireland.

Leveraging the instability caused by Brexit, nationalists have renewed calls for a referendum on formal Irish reunification.

For unionists loyal to Britain, that represents an existential threat. Young loyalists born after the height of the troubles are particularly fearful of losing a British identity that has always been theirs.

Recent spasms of street disorder suggest they will defend that identity with violence, if necessary. In some neighborhoods, nationalist youths have countered with violence of their own.

In its centenary year, Northern Ireland teeters on the edge of a painfully familiar precipice.

James Waller, Cohen Professor of Holocaust and Genocide Studies, Keene State College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How worried should you be about coronavirus variants? A virologist explains his concerns

Spring has sprung, and there is a sense of relief in the air. After one year of lockdowns and social distancing, more than 171 million COVID-19 vaccine doses have been administered in the U.S. and about 19.4% of the population is fully vaccinated. But there is something else in the air: ominous SARS-CoV-2 variants.

I am a virologist and vaccinologist, which means that I spend my days studying viruses and designing and testing vaccine strategies against viral diseases. In the case of SARS-CoV-2, this work has taken on greater urgency. We humans are in a race to become immune against this cagey virus, whose ability to mutate and adapt seems to be a step ahead of our capacity to gain herd immunity. Because of the variants that are emerging, it could be a race to the wire.

Five variants to watch

RNA viruses like SARS-CoV-2 constantly mutate as they make more copies of themselves. Most of these mutations end up being disadvantageous to the virus and therefore disappear through natural selection.

Occasionally, though, they offer a benefit to the mutated or so-called genetic-variant virus. An example would be a mutation that improves the ability of the virus to attach more tightly to human cells, thus enhancing viral replication. Another would be a mutation that allows the virus to spread more easily from person to person, thus increasing transmissibility.

None of this is surprising for a virus that is a fresh arrival in the human population and still adapting to humans as hosts. While viruses don't think, they are governed by the same evolutionary drive that all organisms are – their first order of business is to perpetuate themselves.

These mutations have resulted in several new SARS-CoV-2 variants, leading to outbreak clusters, and in some cases, global spread. They are broadly classified as variants of interest, concern or high consequence.

Currently there are five variants of concern circulating in the U.S.: the B.1.1.7, which originated in the U.K.; the B.1.351, of South African origin; the P.1, first seen in Brazil; and the B.1.427 and B.1.429, both originating in California.

Each of these variants has a number of mutations, and some of these are key mutations in critical regions of the viral genome. Because the spike protein is required for the virus to attach to human cells, it carries a number of these key mutations. In addition, antibodies that neutralize the virus typically bind to the spike protein, thus making the spike sequence or protein a key component of COVID-19 vaccines.

India and California have recently detected “double mutant" variants that, although not yet classified, have gained international interest. They have one key mutation in the spike protein similar to one found in the Brazilian and South African variants, and another already found in the B.1.427 and B.1.429 California variants. As of today, no variant has been classified as of high consequence, although the concern is that this could change as new variants emerge and we learn more about the variants already circulating.

More transmission and worse disease

These variants are worrisome for several reasons. First, the SARS-CoV-2 variants of concern generally spread from person to person at least 20% to 50% more easily. This allows them to infect more people and to spread more quickly and widely, eventually becoming the predominant strain.

For example, the B.1.1.7 U.K. variant that was first detected in the U.S. in December 2020 is now the prevalent circulating strain in the U.S., accounting for an estimated 27.2% of all cases by mid-March. Likewise, the P.1 variant first detected in travelers from Brazil in January is now wreaking havoc in Brazil, where it is causing a collapse of the health care system and led to at least 60,000 deaths in the month of March.

Second, SARS-CoV-2 variants of concern can also lead to more severe disease and increased hospitalizations and deaths. In other words, they may have enhanced virulence. Indeed, a recent study in England suggests that the B.1.1.7 variant causes more severe illness and mortality.

Another concern is that these new variants can escape the immunity elicited by natural infection or our current vaccination efforts. For example, antibodies from people who recovered after infection or who have received a vaccine may not be able to bind as efficiently to a new variant virus, resulting in reduced neutralization of that variant virus. This could lead to reinfections and lower the effectiveness of current monoclonal antibody treatments and vaccines.

Researchers are intensely investigating whether there will be reduced vaccine efficacy against these variants. While most vaccines seem to remain effective against the U.K. variant, one recent study showed that the AstraZeneca vaccine lacks efficacy in preventing mild to moderate COVID-19 due to the B.1.351 South African variant.

On the other hand, Pfizer recently announced data from a subset of volunteers in South Africa that supports high efficacy of its mRNA vaccine against the B.1.351 variant. Other encouraging news is that T-cell immune responses elicited by natural SARS-CoV-2 infection or mRNA vaccination recognize all three U.K., South Africa, and Brazil variants. This suggests that even with reduced neutralizing antibody activity, T-cell responses stimulated by vaccination or natural infection will provide a degree of protection against such variants.

Stay vigilant, and get vaccinated

What does this all mean? While current vaccines may not prevent mild symptomatic COVID-19 caused by these variants, they will likely prevent moderate and severe disease, and in particular hospitalizations and deaths. That is the good news.

However, it is imperative to assume that current SARS-CoV-2 variants will likely continue to evolve and adapt. In a recent survey of 77 epidemiologists from 28 countries, the majority believed that within a year current vaccines could need to be updated to better handle new variants, and that low vaccine coverage will likely facilitate the emergence of such variants.

What do we need to do? We need to keep doing what we have been doing: using masks, avoiding poorly ventilated areas, and practicing social distancing techniques to slow transmission and avert further waves driven by these new variants. We also need to vaccinate as many people in as many places and as soon as possible to reduce the number of cases and the likelihood for the virus to generate new variants and escape mutants. And for that, it is vital that public health officials, governments and nongovernmental organizations address vaccine hesitancy and equity both locally and globally.

Paulo Verardi, Associate Professor of Virology and Vaccinology, University of Connecticut

This article is republished from The Conversation under a Creative Commons license. Read the original article.

There’s a surprising ending to all the 2020 election conflicts over absentee ballot deadlines

One of the most heavily contested voting-policy issues in the 2020 election, in both the courts and the political arena, was the deadline for returning absentee ballots.

Going into the election, the policy in a majority of states was that ballots had to be received by election night to be valid. Lawsuits seeking an extension of these deadlines were brought around the country for two reasons: First, because of the pandemic, the fall election would see a massive surge in absentee ballots; and second, there were concerns about the competence and integrity of the U.S. Postal Service, particularly after President Trump appointed a major GOP donor as the new postmaster general.

The issue produced the Supreme Court's most controversial decision during the general election, which prohibited federal courts from extending the ballot-receipt deadlines in state election codes. Now that the data are available, a post-election audit provides perspective on what the actual effects of these deadlines turned out to be.

Perhaps surprisingly, the number of ballots that came in too late to be valid was extremely small, regardless of what deadline states used, or how much that deadline shifted back and forth in the months before the election. The numbers were nowhere close to the number of votes that could have changed the outcome of any significant race.

Changing deadlines in Wisconsin

Take Wisconsin and Minnesota, two important states that were the site of two major court controversies over these issues. In both, voters might be predicted to be the most confused about the deadline for returning absentee ballots, because those deadlines kept changing.

In Wisconsin, state law required absentee ballots to be returned by Election Night. The federal district court ordered that deadline extended by six days. But the Supreme Court, in a 5-3 decision, blocked the district court's order and required the deadline in the state's election code to be respected.

Supreme Court Justice Elena Kagan warned in a dissent on an absentee ballot case from Wisconsin that 'tens of thousands of Wisconsinites, through no fault of their own,' would be disenfranchised by the court's ruling.

Writing for the three dissenters, Justice Elena Kagan invoked the district court's prediction that as many as 100,000 voters would lose their right to vote, through no fault of their own, as a result of the majority's ruling that the normal state-law deadline had to be followed. Commentators called this a “disastrous ruling” that “would likely disenfranchise tens of thousands” of voters in this key state.

The post-election audit now provides perspective on this controversy that sharply divided the court. Ultimately, only 1,045 absentee ballots were rejected in Wisconsin for failing to meet the Election Night deadline. That amounts to 0.05% of the 1,969,274 valid absentee votes cast, or 0.03% of the total vote in Wisconsin.

If we put this in partisan terms and take Biden as having won roughly 70% of the absentee vote nationwide, that means he would have added 418 more votes to his margin of victory had these late-arriving ballots been valid.
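The arithmetic behind that 418-vote figure is simple enough to sketch. Each late ballot would have added the winner's vote share to his total and the remainder to his opponent's, so the net margin shift per ballot is the gap between the two shares. The helper function below is hypothetical, and the 70% share is the article's rough estimate, not an exact figure:

```python
def net_margin_shift(late_ballots: int, winner_share: float) -> int:
    """Net change in the winner's margin had late-arriving ballots counted.

    Each late ballot adds winner_share of a vote to the winner and
    (1 - winner_share) to the opponent, so the net shift per ballot is
    winner_share - (1 - winner_share) = 2 * winner_share - 1.
    """
    return round(late_ballots * (2 * winner_share - 1))

# Wisconsin: 1,045 late absentee ballots, Biden at roughly 70% of the absentee vote.
print(net_margin_shift(1045, 0.70))  # → 418
```

The same formula applied to Pennsylvania's roughly 10,000 late ballots, at the article's estimated 75% Biden share, reproduces the figure of about 5,000 votes cited below.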

Changing deadlines in Minnesota

The fight over ballot deadlines in Minnesota was even more convoluted. If voters were going to be confused anywhere about these deadlines, with lots of ballots coming in too late as a result, it might have been expected to be here.

State law required valid ballots to be returned by Election Night, but as a result of litigation challenging that deadline, the secretary of state had agreed in early August that ballots would be valid if they were received up to seven days later.

But a mere five days before the election, a federal court pulled the rug out from under Minnesota voters. On Oct. 29, it held that Minnesota's secretary of state had violated the federal Constitution and had no power to extend the deadline. The original Election Night deadline thus snapped back into effect at the very last minute.

Yet it turns out that only 802 ballots, out of 1,929,945 absentees cast (0.04%), were rejected for coming in too late.

Even though voting-rights plaintiffs lost their battles close to Election Day in both Wisconsin and Minnesota, with the deadlines shifting back and forth, only a tiny number of ballots arrived too late.

Where deadlines didn't change

What happened in states that had a consistent policy throughout the run-up to the election that required ballots to be returned by Election Night?

Among battleground states, Michigan provides an example. Only 3,328 ballots arrived after Election Day, too late to be counted, which was 0.09% of the total votes cast there.

Finally, Pennsylvania and North Carolina were two states in which litigation did succeed in generating decisions that overrode the state election code and pushed ballot-receipt deadlines back – in Pennsylvania by three days, in North Carolina by six days.

These decisions provoked intense political firestorms in some quarters, particularly regarding Pennsylvania. The Pennsylvania Supreme Court's three-day extension of the deadline became the primary justification that some Republican senators and representatives offered on Jan. 6 for objecting to counting the state's Electoral College votes.

How many voters took advantage of these extended deadlines? In North Carolina, according to information that the state Board of Elections provided to me, 2,484 ballots came in during the additional six days after Election Day that the judicial consent decree added. That comes to 0.04% of the total valid votes cast in the state.

In Pennsylvania, about 10,000 ballots came in during the extended deadline window, out of the 2,637,065 valid absentee ballots. That's 0.14% of the total votes cast there. These 10,000 ballots were not counted in the state's certified vote total, but had they been, Biden would likely have added around 5,000 votes to his margin of victory, given that he won about 75% of the state's absentee vote.

These are not the numbers of ballots, of course, that would have come in late had the courts refused to extend the deadline in these two states. They show the maximum number that arrived after Election Day when voters had every right to return their ballots this late. Even so, those numbers are still far lower than the 100,000 that had been predicted in Wisconsin.

But had the statutory deadlines remained in place in Pennsylvania and North Carolina, there is no reason to think the number of late absentees would have been much different from those in similar swing states like Michigan, where the statutory deadlines remained fixed and 0.09% of ballots arrived too late.

Across the country, only a small number of absentee ballots came in after the legal deadlines.

Highly engaged voters

The small number of absentee ballots that came in after the legal deadlines occurred despite a massive surge in absentee voting in nearly all states. What explains that?

Voters were highly engaged, as the turnout rate showed. They were particularly attuned to the risk of delays in the mail from seeing this problem occur in the primaries. Throughout the weeks before the election, voters were consistently returning absentee ballots at higher rates than in previous elections.

The communications efforts of the Biden campaign and the state Democratic parties, whose voters cast most of these absentee votes, got the message across about these state deadlines. Election officials did a good job of communicating these deadlines to voters. In some states, drop boxes that permitted absentee ballots to be returned without using the mail might have helped minimize the number of late arriving ballots, though we don't have any empirical analysis on that.

In a highly mobilized electorate, it turns out that the specific ballot-return deadlines, and whether they shifted even late in the day, did not lead to large numbers of ballots coming in too late.

That's a tribute to voters, election officials, grassroots groups – and to the campaigns.

Richard Pildes, Professor of Constitutional Law, New York University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Christian nationalism is a barrier to mass vaccination against COVID-19

While the majority of Americans either intend to get the COVID-19 vaccine or have already received their shots, getting white evangelicals to vaccination sites may prove more of a challenge – especially those who identify as Christian nationalists.

A Pew Research Center survey conducted in February found white evangelicals to be the religious group least likely to say they'd be vaccinated against the coronavirus. Nearly half (45%) said they would not get the COVID-19 shot, compared with 30% of the general population.

Some evangelicals have even linked coronavirus vaccinations to the “mark of the beast” – a symbol of submission to the Antichrist found in biblical prophecy (Revelation 13:18).

As a scholar of religion and society, I know that this skepticism among evangelicals has a background. Suspicion from religious conservatives regarding the COVID-19 vaccine is built on the back of their growing distrust of science, medicine and the global elite.

'Anti-mask, anti-social distance, anti-vaccine'

Vaccine hesitancy is not restricted to immunization over COVID-19. In 2017, the Pew Research Center found that more than 20% of white evangelicals – more than any other group – believed that “parents should be able to decide not to vaccinate their children, even if that may create health risks for other children and adults.”

Meanwhile, there are concerns that many white evangelicals are becoming more radical. Faith is not in itself an indication of extremism, but the attack on the Capitol on Jan. 6 showed that there is a problem when it comes to some evangelicals also holding extreme beliefs. White evangelicalism, in particular, has been susceptible to Christian nationalism – the belief that the U.S. is a Christian nation that should serve the interests of white Americans.

Those who identify as Christian nationalists believe they are God's chosen people and will be protected from any illness or disease.

This proves problematic when it comes to vaccinations. A study earlier this year found Christian nationalists were far more likely to abstain from taking the COVID-19 vaccine. It builds on research that found Christian nationalism was a leading predictor of ignoring precautionary behaviors regarding coronavirus.

Christian nationalists tend to place vaccinations within a worldview that generally distrusts science and scientists as a threat to the moral order. This was seen in the response of many on the religious right to guidance on masks and social distancing as well as, now, vaccines.

And in some cases it was driven by church leaders in the wider conservative evangelical community. For example, Tony Spell, a minister at the Life Tabernacle Church in Baton Rouge, Louisiana, defied authorities in holding mass church gatherings even after the state deemed them illegal. He has also rejected warnings that the pandemic is dangerous, stating, “We're anti-mask, anti-social distancing, and anti-vaccine.”

He believes the vaccine is politically motivated and has used his pulpit to discourage church members from taking the vaccine.

This anti-vaccine attitude fits with the anti-government libertarianism that predominates among Christian nationalists. Many within the movement place this belief in freedom from government action within a traditional religious framework.

They feel that COVID-19 is God's divinely ordained message telling the world to change. If the government tells them to go against that idea and vaccinate, many feel they are either going against God's will or that the government is violating their religious freedom.

Such a view was also seen before the vaccination rollout. White evangelicals were the least likely religious group to support mandated closures of businesses, for example.

Countering misinformation

The problem isn't just that Christian nationalist beliefs will be a considerable barrier to herd immunity. To dispel myths about the COVID-19 vaccination among conservative religious communities, church leaders need to be enlisted to communicate facts about the vaccine to their parishioners – who may trust church leaders more than scientists and the government.

For vaccination rates to be increased, messages must come from trusted people in the community. The opinion of a government official will in many instances matter far less to a Christian nationalist than advice from a church leader.

As such, I argue, faith leaders can guide their followers and use their pulpits to encourage parishioners that the vaccine is safe and in line with religious doctrines.

To enable this, church leaders need to both understand and communicate to parishioners the origins of the vaccine. Many evangelicals are under the mistaken impression that vaccines were developed using fresh fetal tissue and are deeply troubled by that belief.
In reality, none of the vaccines for COVID-19 available in the U.S. was manufactured using new fetal stem cells, though the Johnson & Johnson one was developed using lab-created stem cell lines derived from a decades-old aborted fetus. Many evangelical churches have determined that it is ethical for anti-abortion Christians to take that vaccine when there are no other options, for the preservation of life.

Some within the wider evangelical movement have begun sounding the alarm over the influence of radicalized Christian nationalism.

After the Jan. 6, 2021, attack on the Capitol, a coalition of evangelical leaders published an open letter warning: “We recognize that evangelicalism, and white evangelicalism in particular, has been susceptible to the heresy of Christian nationalism because of a long history of faith leaders accommodating white supremacy.”

And many high-profile evangelical leaders acknowledge that they can maintain their personal and biblical integrity while also supporting scientific breakthroughs by connecting what they see as the wonders of God's universe to science.

For example, Francis Collins, head of the National Institutes of Health and a devoted evangelical Christian, has said: “The church, in this time of confusion, ought to be a beacon, a light on the hill, an entity that believes in truth.”

“This is a great moment for the church to say, no matter how well intentioned someone's opinions may be, if they're not based upon the fact, the church should not endorse them.”

Monique Deal Barlow, Doctoral Student of Political Science, Georgia State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Biden wants corporations to pay for his $2 trillion infrastructure plans, echoing a history of calls for companies to chip in when times are tough

President Joe Biden just proposed a roughly US$2 trillion infrastructure plan, which he ambitiously compared to the interstate highway system and the space race. He aims to pay for it solely by taxing companies more, including the first increase in the corporate tax rate since the 1960s.

Biden said he wants to increase the rate from 21% to 28% – which would still be below the 35% level it was at before the 2017 tax cut – and strengthen the global minimum tax to discourage multinational corporations from using tax havens. Together, he estimates it would raise the necessary funds to finance his plan over 15 years.

“No one should be able to complain about” raising the rate to 28%, Biden said in a speech announcing the plan. “It's still lower than what that rate was between World War II and 2017.”

As an expert on tax policy, I believe he's got a point.

What's more, I think the president's plan appeals to the basic principle of tax fairness that the corporate income tax was founded on: The taxes a person or business pays should be commensurate with the benefits they receive from public spending. And companies receive quite a lot.

History of the corporate tax

Prior to the 20th century, the federal government funded itself primarily with tariffs and excise taxes on goods such as alcohol and tobacco.

The first corporate income tax was signed by President Abraham Lincoln in 1862 to help fund the Civil War and then phased out in the 1870s.

As the U.S. grew in the early 20th century, policymakers worried about the economic and trade risks of relying too heavily on high tariffs. So in 1909 they created the corporate income tax that we know today, almost as an afterthought, in a bill that was designed to reform tariffs.

Corporate taxes did not become a major part of the U.S. tax system until they were used to help finance World Wars I and II. Before 1916, the rate was just 1% but grew to 12% during WWI and ballooned to 40% during WWII. Congress also passed “excess profits" taxes of up to 95% to curb wartime profiteering in certain industries.

Corporate tax revenue as a share of total government revenue peaked during the war, in 1943, at just under 40%.

After the war, excess profits taxes were eliminated, but lawmakers kept the regular rate high and raised it to 52% in 1951 as the U.S. entered the Korean War.

The thinking on corporate taxes began to change in the 1960s. In his 1963 State of the Union address, President John F. Kennedy proposed corporate tax cuts, arguing that they would “encourage the initiative and risk-taking on which our free system depends – induce more investment, production, and capacity use – help provide the two million new jobs we need every year – and reinforce the American principle of additional reward for additional effort.”

Shortly after JFK's assassination, Congress passed his Revenue Act of 1964, which lowered the corporate rate to 48%.

But the high costs of the Vietnam War led Lyndon B. Johnson to add a temporary surcharge in 1968, which raised the rate to a high of 52.8% before being lowered back to 48% by 1971.

After that, the corporate tax rate began its 50-year decline as successive administrations, especially Republican ones, gradually chipped away at it. As a result, by 2020, corporations were covering just 7% of government revenue – compared with the 33% borne by individuals and families – even as they raked in record profits.

This same trend has been seen around the world as globalization prompted many countries such as China, Japan and European Union member states to lower business taxes in a global “race to the bottom.” In 2016, corporate tax rates made up just 9% of total government revenue on average in countries in the Organization for Economic Cooperation and Development.

The benefits principle

It seems that after 50 years of falling corporate tax rates, this trend may finally be coming to an end – at least in the U.S.

Although Republicans in Congress remain firmly opposed to any increase in taxes to pay for infrastructure spending, the public seems to be on Biden's side. A 2019 Gallup poll found that more than two-thirds of Americans believe that corporations pay less than their fair share in taxes. More recently, 47% of those polled on early reports on Biden's infrastructure plan said they'd be “more likely" to support it if it was paid for with corporate tax increases, while 31% said it would have no impact on their views. Only 21% said it would make them less likely to support it.

I believe the popular appeal of Biden's plan is that it reflects the benefits principle, which states that a tax bill should be based on the value of the benefits a person or company receives from the government.

And that's basically the reasoning behind the corporate tax created in 1909.

A 1911 Supreme Court decision that upheld the constitutionality of the corporate income tax explicitly invoked the benefits principle, arguing that with incorporation the owners of a business enjoy “distinctive privilege[s]" – such as protection from individual liability and the ability to sell stock – “which do not exist when the same business is conducted by private individuals or partnerships. It is this distinctive privilege which is the subject of taxation."

Decades of research show that businesses benefit tremendously from being located in countries with political stability and inclusive institutions that promote the rule of law, protection of property, due process and democratic participation. Companies based in the U.S., perhaps more than anywhere else, enjoy all these privileges.

And that's before we even get to what government directly spends money on. Companies have benefited enormously from investments in mass transportation, local development, free public education, the internet and many other types of infrastructure.

Speaking of infrastructure, and Biden's plan, companies arguably benefit more than anyone from repaired bridges, upgraded electrical grids, increased broadband access, and research and development. According to a review of the research by the Congressional Research Service, public investment in core infrastructure, especially during a recession, can spur faster productivity growth and reduce long-term unemployment.

One objection to the benefits principle logic – as economists are fond of reminding us – is that businesses cannot actually pay taxes. Only people can. When a business is taxed, that tax is passed on in some combination of higher consumer prices, lower wages and lower returns to capital. Economists call this concept tax incidence and often argue that a misunderstanding of tax incidence is what leads many people to support higher taxes on business.

But economists may be missing the point. The benefits principle justification for taxing corporations relies on the logic of procedural justice – an emphasis on the fairness of the process rather than the outcome. If corporate balance sheets benefit from public investment, procedural justice dictates that their taxes should reflect that benefit, even if those costs and benefits are ultimately passed on to individuals.

The price you pay

And so it makes a lot of sense for companies that will benefit so much from these investments to pick up the tab. Investment that boosts productivity will mean big gains for corporate America.

As the recent experience with COVID-19 vaccine development has shown – in which the U.S. has invested billions of dollars to get a vaccine in record time – businesses also benefit from investment in basic science research. The sooner everyone is vaccinated, the sooner the economy and business can get back to normal.

Put simply, a reasonable level of taxation can be thought of as the price corporations pay for all the benefits American taxpayers have given them.

By Stephanie Leiser, Lecturer in Public Policy, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Black poets and writers gave a voice to ‘Affrilachia’

Appalachia, in the popular imagination, stubbornly remains poor and white.

Open a dictionary and you'll see Appalachian described as a “native or inhabitant of Appalachia, especially one of predominantly Scotch-Irish, English, or German ancestry."

Read J.D. Vance's “Hillbilly Elegy" and you'll enter a world that's white, poor and uncultured, with few, if any, people of color.

But as Black poets and scholars living in Appalachia, we know that this simplified portrayal obscures a world that is far more complex. It has always been a place filled with diverse inhabitants and endowed with a lush literary history. Black writers like Effie Waller Smith have been part of this cultural landscape as far back as the 19th century. Today, Black writers and poets continue to explore what it means to be Black and from Appalachia.

Swimming against cultural currents, they have long struggled to be heard. But a turning point took place 30 years ago, when Black Appalachian culture experienced a renaissance centered around a single word: “Affrilachia."

Upending a 'single story' of Appalachia

In the 1960s, the Appalachian Regional Commission officially defined the Appalachian region as an area encompassing counties in Alabama, Georgia, Kentucky, Maryland, North Carolina, South Carolina, Pennsylvania, Tennessee, Virginia and the entirety of West Virginia. The designation brought national attention – and calls for economic equity – to an impoverished region that had largely been ignored.

When President Lyndon B. Johnson declared his “war on poverty" in 1964, it was with Appalachia in mind. However, as pernicious as the effects of poverty have been for white rural Appalachians, they've been worse for Black Appalachians, thanks to the long-term repercussions of slavery, Jim Crow laws, racial terrorism and a dearth of regional welfare programs.

Black Appalachians have long been, as poet and historian Edward J. Cabbell put it, “a neglected minority within a neglected minority."

A 1935 Farm Security Administration photograph of kids in Omar, West Virginia. (Library of Congress)

Nonetheless, throughout the 20th century, Black Appalachian writers like Nikki Giovanni and Norman Jordan continued to write and wrestle with what it meant to be both Black and Appalachian.

In 1991, after a poetry reading that included Black poets from the Appalachian region, Kentucky poet Frank X. Walker decided to give a name to his experience as a Black Appalachian: “Affrilachian." It subsequently became the title of a poetry collection he released in 2000.

By coining the terms “Affrilachia" and “Affrilachian," Walker sought to upend assumptions about who is part of Appalachia. Writer Chimamanda Ngozi Adichie has spoken of the danger of the single story. When “one story becomes the only story," she said in a 2009 TED Talk, “it robs people of dignity."

Rather than accepting the single story of Appalachia as white and poor, Walker wrote a new one, forging a path for Black Appalachian artists.

It caught on.

In 2001, a number of Affrilachian poets – including Walker, Kelly Norman Ellis, Crystal Wilkinson, Ricardo Nazario y Colon, Gerald Coleman, Paul C. Taylor and Shanna Smith – were the subjects of the documentary “Coal Black Voices." In 2007, the journal Pluck! was founded at the University of Kentucky with the goal of promoting a diverse range of Affrilachian writers at the national level. In 2016, the anthology “Black Bone: 25 Years of Affrilachian Poetry" was published.

A unique style emerges

Roughly 9% of Appalachian residents are Black, and this renders many of the region's Black people “hypervisible," meaning they stick out in primarily white spaces.

Many Affrilachian poems explore this dynamic, along with the tension of participating in activities, such as hunting, that are stereotyped as being of interest only to white Americans. Food traditions, family and the Appalachian landscape are also central themes of the work.

Affrilachian poet Chanda Feldman's poem “Rabbit" touches on all of these elements.

Her poem shifts from the speaker hunting for rabbits with their father to the hunt as a larger metaphor for being Black in Appalachia – and thus seen as both predator and prey:

        He told me
  of my great uncle who, Depression era,
  loaned white townspeople venison
  and preserves. Later stood off
  the same ones with a gun
  when they wanted his property.

An Affrilachian future

We reached out to Walker and asked him to reflect on the term, 30 years after he coined it.

Walker wrote back that it created a “solid foundation" that “encouraged a more diverse view of the region and its history" while increasing “opportunities for others to carve out their own space" – including other poets, musicians and visual artists of color throughout the region.

In her book “Sister Citizen," journalist and academic Melissa Harris-Perry writes, “Citizens want and need more than a fair distribution of resources: they also desire meaningful recognition of their humanity and uniqueness."

Affrilachian artistry and identity allows Appalachia to be fully seen as the diverse and culturally rich region that it is, bringing to the forefront those who have historically been pushed to the margins, out of mind and out of sight.


Amy M. Alvarez, Assistant Teaching Professor, English, West Virginia University and Jameka Hartley, Instructor of Gender & Race Studies, University of Alabama

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Politicians have 'washed their hands' and blamed others since Jesus's crucifixion

Handwashing has gotten substantial coverage this past year during the COVID-19 pandemic, and not just for hygiene. You may have encountered some of the many accusations in both the U.S. and Canada that a politician has “washed his hands" of pandemic responsibilities.

Sometimes the reference includes a nod to the historical figure associated with this phrase: Recently in the U.S., a conservative commentator faulted President Joe Biden, saying he is “like Pontius Pilate: just washes his hands and stays quiet."

These handwashing images derive from iconic biblical scripture referring to events preceding Jesus's crucifixion.

In one of the earliest versions of these events, Pontius Pilate, the Roman governor of Judea from at least 26 to 37 CE — the only man with the power to order a crucifixion — washes his hands before a crowd. In the Gospel of Matthew, he simultaneously assents to Jesus's execution and claims no personal responsibility.

Throughout the history of Christianity, representations of Pilate's handwashing have often been used to shift blame for Jesus's death to Jews, and have been part of a toxic legacy of Christian and western antisemitism.

The historical Pilate

In the first century CE, the Roman empire ruled the sub-province of Judea through military governors like Pilate, who were tasked with quashing any rebellions against Roman rule. Pilate was the only person in Judea with the authority to execute someone by crucifixion, a brutal form of capital punishment reserved for slaves and non-citizens deemed subversive.

Helen Bond, professor of Christian origins, explains that “the execution of Jesus was in all probability a routine crucifixion of a messianic agitator" by a Roman governor.

Jewish sources convey that Pilate was hostile toward Jews and their customs. Philo of Alexandria even lamented Pilate's “continual murders of people untried and uncondemned."

Exonerating Pilate

Yet, the New Testament gospels offer ambivalent portraits of the man who ordered Christ's execution. There are four different accounts of Jesus's sentencing and death, but all agree Pilate was reluctant to declare Jesus guilty.

Each gospel depicts Pilate finding Jesus blameless but acquiescing to execute him, whether due to personal weakness, to appease the crowds or to legitimate his own authority and the emperor's. Instead of impugning Pilate, the gospels shift the blame for Jesus's death to Jewish authorities.

Each of these gospels was written during the decades following the destruction of the Jerusalem temple by the Romans (70 CE), the climax of the First Jewish Revolt. This was a period of rampant anti-Judaism: imperialist media such as coins and monuments indiscriminately linked Jews from across the empire to the rebels in Judea and cast Jews as barbaric traitors. The empire punished all Jews, for instance, with a tax.

This created a challenge for those early followers of Jesus — both Jews and gentiles — who proclaimed that their Saviour was a Jew whom Rome executed as a criminal. The gospel authors stressed that Jesus opposed the Jewish authorities and was not found guilty by the Roman governor.

Jewish and gentile Jesus followers

How to understand depictions of “Jews" in gospels written before the self-identification “Christian" became widespread in the early second century is thus immensely complicated. The Gospel of John, for instance, emerged from a gentile community. It never uses the term “Christian" yet distinguishes followers of Christ from Jews through hostile rhetoric demonizing “the Jews" as children of the devil, as the New Testament scholar Adele Reinhartz has shown.

Matthew's gospel, however, was produced by a community of Christ-followers who more clearly fit within the spectrum of Jewish identities, yet were eager to distinguish themselves from Jewish leaders who had been involved in the revolt and post-war Jewish leaders (namely, the rabbis). In this case, rhetorical attacks against certain Jewish leaders reflect an inter-sectarian argument among Jews.

Transferring guilt

The pattern of exonerating Pilate by blaming Jewish leaders is unmistakable in Matthew's gospel. It includes a “blood curse" that is the basis of a toxic formula that Christians have used to justify centuries of Christian anti-Judaism, often resulting in reprehensible acts of violence against Jews: “So when Pilate saw that he could do nothing … he took some water and washed his hands … saying, 'I am innocent of this man's blood; see to it yourselves.' Then the people as a whole answered, 'His blood be on us and on our children!'"

Matthew also writes “the chief priests and the elders" were manipulating the crowds. He often accuses Jewish leaders of such corruption as well as hypocrisy and misunderstanding the Jewish law.

Pilate's handwashing alludes to an older account from Jewish scripture. Deuteronomy 21:1-9 prescribes a ritual through which Israel can be “absolved of bloodguilt" for a murder committed by an unknown person. Because the culprit can't be prosecuted, this ritual removes “bloodguilt," or communal liability for “innocent blood," that would otherwise remain in the midst of the people of Israel.

The rite entails the people's elders washing their hands of bloodguilt while priests break a heifer's neck. Matthew inverts Deuteronomy's ritual, and casts the priests and elders as hypocrites who invited bloodguilt onto their kinfolk.

Pilate's redemption and anti-Judaism

Through early Christian writers, Pilate became an even more positive figure by the time the Roman Empire adopted Christianity. Some considered Pilate a Christian, at least “in his conscience," as the early theologian Tertullian wrote. The Coptic Church proclaimed him a saint in the sixth century. Pilate even appears in the Niceno-Constantinopolitan creed, a Christian statement of faith: Jesus was “crucified for us under Pontius Pilate." Note the statement says “under" and not “by" Pilate.

Ancient Christian texts doubled down on the New Testament gospels' shifting of blame from Pilate to Jews, as professor of the New Testament Warren Carter has shown.

Christian authors deployed ambivalent and positive images of Pilate to show that Christianity was not a threat to Roman law and order. In doing so, they fanned the flames of anti-Judaism. Art historian Colum Hourihane has explored how these anti-Jewish interpretations eventually led to negative characterizations of Pilate himself as a Jew during the medieval period in Europe. At this time, Christians blamed Jews for plagues.


Ayn Rand-inspired 'myth of the founder' puts tremendous power in hands of Big Tech CEOs like Zuckerberg – posing real risks to democracy

Coinbase's plan to go public in April highlights a troubling trend among tech companies: Its founding team will maintain voting control, making it mostly immune to the wishes of outside investors.

The best-known U.S. cryptocurrency exchange is doing this by creating two classes of shares. One class will be available to the public. The other is reserved for the founders, insiders and early investors, and will wield 20 times the voting power of regular shares. That will ensure that after all is said and done, the insiders will control 53.5% of the votes.

Coinbase will join dozens of other publicly traded tech companies – many with household names such as Google, Facebook, Doordash, Airbnb and Slack – that have issued two types of shares in an effort to retain control for founders and insiders. The reason this is becoming increasingly popular has a lot to do with Ayn Rand, one of Silicon Valley's favorite authors, and the “myth of the founder" her writings have helped inspire.

Engaged investors and governance experts like me generally loathe dual-class shares because they undermine executive accountability by making it harder to rein in a wayward CEO. I first stumbled upon this method executives use to limit the influence of pesky outsiders while working on my doctoral dissertation on hostile takeovers in the late 1980s.

But the risks of this trend are greater than simply entrenching bad management. Today, given the role tech companies play in virtually every corner of American life, it poses a threat to democracy as well.

All in the family

Dual-class voting structures have been around for decades.

When Ford Motor Co. went public in 1956, its founding family used the arrangement to maintain 40% of the voting rights. Newspaper companies like The New York Times and The Washington Post often use the arrangement to protect their journalistic independence from Wall Street's insatiable demands for profitability.

In a typical dual-class structure, the company will sell one class of shares to the public, usually called class A shares, while founders, executives and others retain class B shares with enough voting power to maintain majority voting control. This allows the class B shareholders to determine the outcome of matters that come up for a shareholder vote, such as who is on the company's board.
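The arithmetic behind this control is simple: multiply each class's share count by its votes per share and compare the totals. A minimal sketch in Python (the share counts below are hypothetical, chosen only to illustrate the mechanism, not Coinbase's actual figures):

```python
def voting_share(class_a: int, class_b: int, b_multiplier: int = 20) -> float:
    """Fraction of total votes held by class B (insider) shareholders.

    class_a: number of class A shares, 1 vote each
    class_b: number of class B shares, b_multiplier votes each
    """
    a_votes = class_a
    b_votes = class_b * b_multiplier
    return b_votes / (a_votes + b_votes)

# Hypothetical example: with a 20x multiplier, 10 million class B shares
# outvote 150 million class A shares, keeping insiders in majority control.
print(round(voting_share(150_000_000, 10_000_000), 3))  # → 0.571
```

This is why a minority economic stake can translate into majority voting control: the multiplier, not the share count, does the work.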

Advocates see a dual-class structure as a way to fend off short-term thinking. In principle, this insulation from investor pressure can allow the company to take a long-term perspective and make tough strategic changes even at the expense of short-term share price declines. Family-controlled businesses often view it as a way to preserve their legacy, which is why Ford remains a family company after more than a century.

It also makes a company effectively immune from hostile takeovers and the whims of activist investors.

Checks and balances

But this insulation comes at a cost for investors, who lose a crucial check on management.

Indeed, dual-class shares essentially short-circuit almost all the other means that limit executive power. The board of directors, elected by shareholder vote, is the ultimate authority within the corporation that oversees management. Voting for directors and proposals on the annual ballot are the main methods shareholders have to ensure management accountability, other than simply selling their shares.

Recent research shows that the value and stock returns of dual-class companies are lower than those of other businesses, and they're more likely to overpay their CEO and waste money on expensive acquisitions.

Companies with dual-class shares rarely made up more than 10% of public listings in a given year until the 2000s, when tech startups began using them more frequently, according to data collected by University of Florida business professor Jay Ritter. The dam began to break after Facebook went public in 2012 with a dual-class stock structure that kept founder Mark Zuckerberg firmly in control – he alone controls almost 60% of the company.

In 2020, over 40% of tech companies that went public did so with two or more classes of shares with unequal voting rights.

This has alarmed governance experts, some investors and legal scholars.

Ayn Rand and the myth of the superhuman founder

If the dual-class structure is bad for investors, then why are so many tech companies able to convince them to buy their shares when they go public?

I attribute it to Silicon Valley's mythology of the founder – what I would dub an “Ayn Rand theory of corporate governance" that credits founders with superhuman vision and competence that merit deference from lesser mortals. Rand's novels, most notably “Atlas Shrugged," portray an America in which titans of business hold up the world by creating innovation and value but are beset by moochers and looters who want to take or regulate what they have created.

Perhaps unsurprisingly, Rand has a strong following among tech founders, whose creative genius may be “threatened" by any form of outside regulation. Elon Musk, Coinbase founder Brian Armstrong and even the late Steve Jobs all have recommended “Atlas Shrugged."

Her work is also celebrated by the venture capitalists who typically finance tech startups – many of whom were founders themselves.

The basic idea is simple: Only the founder has the vision, charisma and smarts to steer the company forward.

It begins with a powerful founding story. Michael Dell and Zuckerberg created their multibillion-dollar companies in their dorm rooms. Founding partner pairs Steve Jobs and Steve Wozniak and Bill Hewlett and David Packard built their first computer companies in the garage – Apple and Hewlett-Packard, respectively. Often the stories are true, but sometimes, as in Apple's case, less so.

And from there, founders face a gantlet of rigorous testing: recruiting collaborators, gathering customers and, perhaps most importantly, attracting multiple rounds of funding from venture capitalists. Each round serves to further validate the founder's leadership competence.

The Founders Fund, a venture capital firm that has backed dozens of tech companies, including Airbnb, Palantir and Lyft, is one of the biggest proselytizers for this myth, as it makes clear in its “manifesto."

“The entrepreneurs who make it have a near-messianic attitude and believe their company is essential to making the world a better place," it asserts. True to its stated belief, the fund says it has “never removed a single founder," which is why it has been a big supporter of dual-class share structures.

Another venture capitalist who seems to favor giving founders extra power is Netscape founder Marc Andreessen. His venture capital firm Andreessen Horowitz is Coinbase's biggest investor. And most of the companies in its portfolio that have gone public also used a dual-class share structure, according to my own review of their securities filings.

Bad for companies, bad for democracy

Giving founders voting control disrupts the checks and balances needed to keep business accountable and can lead to big problems.

WeWork founder Adam Neumann, for example, demanded “unambiguous authority to fire or overrule any director or employee." As his behavior became increasingly erratic, the company hemorrhaged cash in the lead-up to its ultimately canceled initial public offering.

Investors forced out Uber's Travis Kalanick in 2017, but not before he reportedly created a workplace culture that let sexual harassment and discrimination fester. When Uber finally went public in 2019, it shed its dual-class structure.

There is some evidence that founder-CEOs are less gifted at management than other kinds of leaders, and their companies' performance can suffer as a consequence.

But investors who buy shares in these companies know the risks going in. There's much more at stake than their money.

What happens when powerful, unconstrained founders control the most powerful companies in the world?

The tech sector is increasingly laying claim to central command posts of the U.S. economy. Americans' access to news and information, financial services, social networks and even groceries is mediated by a handful of companies controlled by a handful of people.

Recall that in the wake of the Jan. 6 Capitol insurrection, the CEOs of Facebook and Twitter were able to eject former President Donald Trump from his favorite means of communication – virtually silencing him overnight. And Apple, Google and Amazon cut off Parler, the right-wing social media platform used by some of the insurrectionists to plan their actions. Not all of these companies have dual-class shares, but this illustrates just how much power tech companies have over America's political discourse.

One does not have to disagree with their decision to see that a form of political power is becoming increasingly concentrated in the hands of companies with limited outside oversight.


Jerry Davis, Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford and Professor of Management and Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Previously thought to be science fiction, a planet in a triple-star system has been discovered

KOI-5Ab is a newly discovered planet in a triple-star system. It is a great example of the kind of astonishing discoveries that result from co-operation between large teams of astronomers using different types of telescopes and observation techniques.

There is a stereotype that “lone genius" scientists make discoveries without any help from others. This is propagated by the prestigious Nobel Prize, which is awarded to at most three scientists at a time.

But major discoveries, particularly in the fields of astronomy and physics, are increasingly achieved by teams of dozens or even hundreds of scientists combining data from multiple experiments and observation techniques.

How to find an exoplanet

One of the fastest-growing areas of astronomy research is the study of planets in other solar systems, called exoplanets. As of this writing, 4,367 exoplanets have been discovered. Trying to observe an exoplanet orbiting around a distant star is a bit like trying to see a firefly crawling on a searchlight, so the vast majority of exoplanets have been discovered using a variety of clever indirect techniques.

One of these is the radial velocity technique, which has been used to discover 833 exoplanets so far. This technique measures tiny shifts in the colour of light from the star as it is gently tugged by its orbiting exoplanet.

Most of the early exoplanet discoveries were made using this technique. The first tentative detection of an exoplanet was by a Canadian team in 1988 using radial velocity. The first definite discovery of an exoplanet in 1995 earned the discoverers the 2019 Nobel Prize in Physics.
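The “tiny shifts in colour" that radial velocity measures can be sketched with the non-relativistic Doppler formula, Δλ = λ·v/c. The numbers below are illustrative only: a stellar wobble of a few tens of metres per second is the rough reflex motion a close-in giant planet induces, not a value from any particular system.

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_nm(rest_wavelength_nm: float, radial_velocity_ms: float) -> float:
    """Wavelength shift (nm) of a spectral line for a given stellar radial velocity (m/s)."""
    return rest_wavelength_nm * radial_velocity_ms / C

# Illustrative: a ~55 m/s wobble observed at a 550 nm line shifts it by
# only ~0.0001 nm, which is why the technique demands extremely stable spectrographs.
print(f"{doppler_shift_nm(550, 55):.6f} nm")
```

The smallness of that shift is the whole story: detecting it reliably is what limits which planets the technique can find.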

Radial velocity was first, but now more than three-quarters of the known exoplanets have been discovered using the transit technique. This technique works by measuring a star's brightness over time, watching for regularly repeated drops in brightness, which could be caused by a planet passing in front of a star during its orbit.

The transit method measures fluctuations in a star's brightness.
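The size of the brightness dip is set by the ratio of the planet's and star's cross-sectional areas: depth ≈ (R_planet / R_star)². A quick sketch using round textbook radii (the Sun is roughly 109 Earth radii, Jupiter roughly 11; these are approximations, not survey data):

```python
def transit_depth(planet_radius: float, star_radius: float) -> float:
    """Fractional dip in stellar brightness as the planet crosses the stellar disk."""
    return (planet_radius / star_radius) ** 2

# Radii in Earth radii, rounded textbook values.
SUN, JUPITER, EARTH = 109.0, 11.0, 1.0

print(f"Jupiter transiting the Sun: {transit_depth(JUPITER, SUN):.4%}")  # ~1% dip
print(f"Earth transiting the Sun:   {transit_depth(EARTH, SUN):.4%}")    # ~0.008% dip
```

A Jupiter-sized planet is comparatively easy to spot; an Earth-sized one requires the part-per-ten-thousand photometric precision that missions like Kepler were built for.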

Thousands of planets

The Kepler Mission carefully measured the brightness of 180,000 stars every one to 30 minutes for four years using a space-based telescope. Almost 2,400 exoplanets were discovered (and over 400 more in the follow-up K2 mission). The Kepler Mission Team officially includes dozens of astronomers and support scientists, and dozens more were able to analyze the publicly available data for additional planetary discoveries.

The Kepler Mission measured its last exoplanet in 2018, and now the Transiting Exoplanet Survey Satellite (TESS) is following in its footsteps. Instead of focusing on a single patch of sky, TESS monitors several in succession.

The downside to the simple transit technique is that there are other astrophysical effects that can cause the same periodic drop in brightness, like background stars that vary in brightness, or starspots (like sunspots). Because of this, when interesting signals are first discovered by transit surveys, they are dispassionately numbered as “objects of interest" until they are validated as real exoplanets by another exoplanet detection technique, often radial velocity.

Right now, the TESS mission has more than two thousand objects of interest and over 100 confirmed exoplanets. The validation process is where many of the really surprising, fascinating exoplanetary systems are teased apart by impressive feats of scientific collaboration and cooperation, and the TESS and Kepler teams maintain a coordination centre to plan and share follow-up data.

Amazing exoplanet systems

Some of the really remarkable exoplanet discoveries to date include planets that orbit around a pair of stars (yes, like Tatooine in Star Wars), seven exoplanets in the same system all closer to their star than Mercury is to our sun, evaporating planets and a brown dwarf with rings that puts Saturn's to shame.

All of these discoveries required a lot of additional modelling and data collection in order to understand the systems, but one of the most complicated exoplanet systems yet was announced in January 2020.

Kepler Object of Interest 5 (KOI-5) was in one of the first batches of possible exoplanets sent down by the Kepler space telescope in 2009. But the first follow-up data quickly showed the system was complicated by an additional star and puzzling follow-up observations. Mission astronomers were gleefully (and perhaps slightly frantically) wading through possible exoplanet discoveries, so it was set aside and the data was left in the public archive. The same system was flagged again a decade later by TESS as a TESS Object of Interest (TOI-1241).

High-resolution imaging by one team of astronomers was combined with longer time baseline radial velocity data from another team and the story began to emerge: KOI-5 was a triple-star system with an exoplanet orbiting one of the stars. This discovery was presented at the January 2021 American Astronomical Society meeting, and a peer-reviewed paper is forthcoming.

I have been a user of various public data archives for exoplanet systems in my research and work, and I fully appreciate how open data policies maximize the scientific research output that can be accomplished with each dataset.

an illustration showing the triple-star system

The KOI-5 triple-star system with its newly discovered exoplanet. (Caltech/R. Hurt, Infrared Processing and Analysis Center), Author provided

Complex orbits

Two sun-sized stars, designated A and B, orbit each other every 29 years in the middle of the system, while a third, smaller star orbits the two central stars every 400 years. The discovered planet is called KOI-5Ab, because it orbits star A, on an orbit that is tilted wildly away from the plane of the stars' orbits.

Data from Kepler and TESS, which required the effort of dozens of astronomers working together, has revealed the size of KOI-5Ab: seven times the radius of the Earth. Another team of astronomers used radial velocity data to measure the mass of KOI-5Ab: 57 times the mass of the Earth. Combining these numbers gives the density, and tells us this planet is a gas giant, a bit smaller and denser than Saturn.
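The density arithmetic above is simple enough to check by hand. A minimal sketch, using the article's published figures (seven Earth radii, 57 Earth masses) together with standard reference densities for Earth and Saturn that are not from the article:

```python
# Back-of-the-envelope density check for KOI-5Ab.
# Figures from the article: radius ~7 Earth radii, mass ~57 Earth masses.
EARTH_DENSITY_G_CM3 = 5.51    # Earth's mean density (reference value)
SATURN_DENSITY_G_CM3 = 0.687  # Saturn's mean density, for comparison

radius_earths = 7.0
mass_earths = 57.0

# Density scales as mass / radius**3 (a sphere's volume goes as r**3),
# so measuring both quantities in Earth units gives density in Earth units.
density_relative_to_earth = mass_earths / radius_earths**3
density_g_cm3 = density_relative_to_earth * EARTH_DENSITY_G_CM3

print(f"KOI-5Ab density: {density_g_cm3:.2f} g/cm^3")   # about 0.92 g/cm^3
print(f"Denser than Saturn? {density_g_cm3 > SATURN_DENSITY_G_CM3}")
```

At roughly 0.92 g/cm³, the result sits above Saturn's 0.69 g/cm³ but well below Earth's, consistent with the article's description of a gas giant a bit denser than Saturn.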

As someone who became an astronomer because I've always loved reading science fiction stories, I like thinking about what it would be like to visit an exoplanet like this. Being a gas planet, we couldn't actually stand on the surface, but if we could hover on the edge of its atmosphere with our spaceship, what would we see?

A few exoplanets have been measured to be very dark, so imagine looking down to see dark brown and grey clouds swirling in turbulent stripes driven by ferocious winds. In the sky, you would see one sun appearing 17 times larger than our sun does from Earth. There would also be another, much smaller sun, only half a per cent as bright as ours (still a thousand times brighter than Earth's full moon), which would complete an orbit through the constellations every thirty years. The third star in the system would move much more slowly against the background stars but, despite its great distance, would still appear much brighter than the full moon does in our sky.

Even in orbit over this planet, full darkness would only be available for brief snatches every couple hundred years when all three stars wandered into the same portion of the celestial sphere. This exoplanet system sounds like a science fiction story, but astronomers have been able to conclusively prove its existence.

Collaborative discovery

Astronomy is one of the better sciences for sharing data. We have the arXiv repository of freely accessible peer-reviewed papers, and standard practice is for telescope data to be publicly accessible in various databases after a short (usually one year) proprietary period.

The co-operation between astronomers using many different observation techniques has led to incredible discoveries like the KOI-5Ab system, and as long as satellites do not ruin ground-based astronomy, large team efforts and collaborations between telescope facilities will continue to produce astronomical discoveries remarkable enough to surpass science fiction.

Samantha Lawler, Assistant professor of astronomy, University of Regina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Mentally ill: Many QAnon followers report having mental health diagnoses

QAnon is often viewed as a group associated with conspiracy, terrorism and radical action, such as the Jan. 6 Capitol insurrection. But radical extremism and terror may not be the real concern this group poses.

QAnon followers, who may number in the millions, appear to believe a baseless and debunked conspiracy theory claiming that a satanic cabal of pedophiles and cannibals controls world governments and the media. They also subscribe to many other outlandish and improbable ideas, such as that the Earth is flat, that the coronavirus is a biological weapon used to gain control over the world's population, that Bill Gates is somehow trying to use coronavirus vaccinations to implant microchips into people and more.

As a social psychologist, I normally study terrorists. During research for "Pastels and Pedophiles: Inside the Mind of QAnon," a forthcoming book I co-authored with security scholar Mia Bloom, I noticed that QAnon followers are different from the radicals I usually study in one key way: They are far more likely to have serious mental illnesses.

Significant conditions

I found that many QAnon followers revealed – in their own words on social media or in interviews – a wide range of mental health diagnoses, including bipolar disorder, depression, anxiety and addiction.

In court records of QAnon followers arrested in the wake of the Capitol insurrection, 68% reported they had received mental health diagnoses. The conditions they revealed included post-traumatic stress disorder, bipolar disorder, paranoid schizophrenia and Munchausen syndrome by proxy – a psychological disorder in which a person invents or inflicts health problems on a loved one, usually a child, to gain attention. By contrast, 19% of all Americans have a mental health diagnosis.

Among QAnon insurrectionists with criminal records, 44% experienced a serious psychological trauma that preceded their radicalization, such as physical or sexual abuse of them or of their children.

The psychology of conspiracy

Research has long revealed connections between psychological problems and beliefs in conspiracy theories. For example, anxiety increases conspiratorial thinking, as do social isolation and loneliness.

Depressed, narcissistic and emotionally detached people are also prone to have a conspiratorial mindset. Likewise, people who exhibit odd, eccentric, suspicious and paranoid behavior – and who are manipulative, irresponsible and low on empathy – are more likely to believe conspiracy theories.

QAnon's rise has coincided with an unfolding mental health crisis in the United States. Even before the COVID-19 pandemic, the number of diagnoses of mental illness was growing, with 1.5 million more people diagnosed in 2019 than in 2018.

The isolation of the lockdowns, compounded by the anxiety related to COVID and the economic uncertainty, made a bad situation worse. Self-reported anxiety and depression quadrupled during the quarantine and now affect as much as 40% of the U.S. population.

A more serious problem

It's possible that people who embrace QAnon ideas may be inadvertently or indirectly expressing deeper psychological problems. This could be similar to when people exhibit self-harming behavior or psychosomatic complaints that are in fact signals of serious psychological issues.

It could be that QAnon is less a problem of terrorism and extremism than it is one of poor mental health.

Only a few dozen QAnon followers are accused of having done anything illegal or violent – which means that for millions of QAnon believers, radicalization may extend to their opinions but not their actions.

In my view, the solution to this aspect of the QAnon problem is to address the mental health needs of all Americans – including those whose problems manifest as QAnon beliefs. Many of them – and many others who are not QAnon followers – could clearly benefit from counseling and therapy.

Sophia Moskalenko, Research Fellow in Social Psychology, Georgia State University

This article is republished from The Conversation under a Creative Commons license.
