A Baltimore pastor said Monday that he turned down an opportunity to meet with President Donald Trump at the White House after the president spent the weekend calling his city "dangerous" and "filthy."
The Baltimore Sun reports that the Rev. Donte Hickman received a last-minute invitation to meet with Trump at the White House on Monday, and he turned it down because he was "unavailable."
Hickman invited Trump to meet with him in Baltimore last year, but the White House canceled a visit that had been planned for December. Hickman did subsequently go to the White House late last year to talk with Trump about what could be done for America's cities, and he told the Sun that his invitation to the city still stands.
Earlier on Monday, Trump tweeted that he was "looking forward to my meeting... with wonderful Inner City Pastors," although the meeting was not on the president's daily schedule and the president never specified the purpose of such a meeting.
Without the ability to feel pain, life is more dangerous. To avoid injury, pain tells us to use a hammer more gently, wait for the soup to cool or put on gloves in a snowball fight. Those with rare inherited disorders that leave them without the ability to feel pain are unable to protect themselves from environmental threats, leading to broken bones, damaged skin, infections and ultimately a shorter life span.
In these contexts, pain is much more than a sensation: It is a protective call to action. But pain that is too intense or long-lasting can be debilitating. So how does modern medicine soften the call?
As a neurobiologist and an anesthesiologist who study pain, we and other researchers have tried to answer this question. Science’s understanding of how the body senses tissue damage and perceives it as pain has progressed tremendously over the past several years. It has become clear that multiple pathways signal tissue damage to the brain and sound the pain alarm bell.
Interestingly, while the brain uses different pain signaling pathways depending on the type of damage, there is also redundancy to these pathways. Even more intriguing, these neural pathways morph and amplify signals in the case of chronic pain and pain caused by conditions affecting nerves themselves, even though the protective function of pain is no longer needed.
Painkillers work by tackling different parts of these pathways. Not every painkiller works for every type of pain, however. Because of the multitude and redundancy of pain pathways, a perfect painkiller is elusive. But in the meantime, understanding how existing painkillers work helps medical providers and patients use them for the best results.
Anti-inflammatory painkillers
Bruises, sprains and broken bones all cause tissue inflammation, an immune response that can lead to swelling and redness as the body tries to heal. Specialized nerve cells called nociceptors in the injured area sense the inflammatory chemicals the body produces and send pain signals to the brain.
Common over-the-counter anti-inflammatory painkillers work by decreasing inflammation in the injured area. These are particularly useful for musculoskeletal injuries or other pain problems caused by inflammation such as arthritis.
Nonsteroidal anti-inflammatories like ibuprofen (Advil, Motrin), naproxen (Aleve) and aspirin do this by blocking an enzyme called COX that plays a key role in a biochemical cascade that produces inflammatory chemicals. Blocking the cascade decreases the amount of inflammatory chemicals, and thereby reduces the pain signals sent to the brain. While acetaminophen (Tylenol), also known as paracetamol, doesn’t reduce inflammation as NSAIDs do, it also inhibits COX enzymes and has similar pain-reducing effects.
Prescription anti-inflammatory painkillers include other COX inhibitors, corticosteroids and, more recently, drugs that target and inactivate the inflammatory chemicals themselves.
Aspirin and ibuprofen work by blocking the COX enzymes that play a key role in pain-causing processes.
Because inflammatory chemicals are involved in other important physiological functions beyond just sounding the pain alarm, medications that block them will have side effects and potential health risks, including irritating the stomach lining and affecting kidney function. Over-the-counter medications are generally safe if the directions on the bottle are followed strictly.
Corticosteroids like prednisone block the inflammatory cascade early on in the process, which is probably why they are so potent in reducing inflammation. However, because all the chemicals in the cascade are present in nearly every organ system, long-term use of steroids can pose many health risks that need to be discussed with a physician before starting a treatment plan.
Topical medications
Many topical medications target nociceptors, the specialized nerves that detect tissue damage. Local anesthetics, like lidocaine, prevent these nerves from sending electrical signals to the brain.
The protein sensors on the tips of other sensory neurons in the skin are also targets for topical painkillers. Activating these proteins can elicit particular sensations, like the cooling of menthol or the burning of capsaicin, that lessen pain by reducing the activity of the damage-sensing nerves.
Certain topical ointments, like menthol and capsaicin, can crowd out pain signals with different sensations.
Because these topical medications work on the tiny nerves in the skin, they are best used for pain directly affecting the skin. For example, a shingles infection can damage the nerves in the skin, causing them to become overactive and send persistent pain signals to the brain. Silencing those nerves with topical lidocaine or an overwhelming dose of capsaicin can reduce these pain signals.
Nerve injury medications
Nerve injuries, most commonly from arthritis and diabetes, can cause the pain-sensing part of the nervous system to become overactive. These injuries sound the pain alarm even in the absence of tissue damage. The best painkillers in these conditions are those that dampen that alarm.
Antiepileptic drugs, such as gabapentin (Neurontin), suppress the pain-sensing system by blocking electrical signaling in the nerves. However, gabapentin can also reduce nerve activity in other parts of the nervous system, potentially leading to sleepiness and confusion.
Antidepressants, such as duloxetine and nortriptyline, are thought to work by increasing certain neurotransmitters in the spinal cord and brain involved in regulating pain pathways. But they may also alter chemical signaling in the gastrointestinal tract, leading to an upset stomach.
All these medications are prescribed by doctors.
Opioids
Opioids are chemicals found in or derived from the opium poppy. One of the earliest opioids, morphine, was purified in the 1800s. Since then, medical use of opioids has expanded to include many natural and synthetic derivatives of morphine with varying potency and duration. Some common examples include codeine, tramadol, hydrocodone, oxycodone, buprenorphine and fentanyl.
Opioids decrease pain by activating the body’s endorphin system. Endorphins are a type of opioid your body naturally produces that decreases incoming signals of injury and produces feelings of euphoria – the so-called “runner’s high.” Opioids simulate the effects of endorphins by acting on similar targets in the body.
While opioids can provide strong pain relief, they are not meant for long-term use because they are addictive.
Although opioids can decrease some types of acute pain, such as pain after surgery, from musculoskeletal injuries like a broken leg, or from cancer, they are often ineffective for neuropathic pain and chronic pain.
Because the body uses opioid receptors in other organ systems like the gastrointestinal tract and the lungs, side effects and risks include constipation and potentially fatal suppression of breathing. Prolonged use of opioids may also lead to tolerance, where more drug is required to get the same painkilling effect. This is why opioids can be addictive and are not intended for long-term use. All opioids are controlled substances and are carefully prescribed by doctors because of these side effects and risks.
Cannabinoids
Although cannabis has received a lot of attention for its potential medical uses, there isn’t sufficient evidence available to conclude that it can effectively treat pain. Since the use of cannabis is illegal at the federal level in the U.S., high-quality clinical research funded by the federal government has been lacking.
Researchers do know that the body naturally produces endocannabinoids, its own versions of the chemicals found in cannabis, to decrease pain perception. Cannabinoids may also reduce inflammation. Given the lack of strong clinical evidence, physicians typically don’t recommend them over FDA-approved medications.
Matching pain to drug
While sounding the pain alarm is important for survival, dampening the klaxon when it’s too loud or unhelpful is sometimes necessary.
No existing medication can perfectly treat pain. Matching specific types of pain to drugs that target specific pathways can improve relief, but even then, a medication may work for some people and not others with the same condition. More research that deepens the medical field’s understanding of pain pathways and drug targets in the body can help lead to more effective treatments and improved pain management.
When you read a sentence like this one, your past experience tells you that it’s written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text.
People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do.
Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking and feeling and experiencing.
The question of what it would mean for an AI model to be sentient is complicated (see, for instance, our colleague’s take), and our goal here is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of thinking that an entity that can use language fluently is sentient, conscious or intelligent.
Using AI to generate humanlike language
Text generated by models like Google’s LaMDA can be hard to distinguish from text written by humans. This impressive achievement is a result of a decadeslong program to build models that generate grammatical, meaningful language.
The first computer system to engage people in dialogue was psychotherapy software called Eliza, built more than half a century ago.
Early language models, dating back to at least the 1950s and known as n-gram models, simply counted up occurrences of specific phrases and used them to guess what words were likely to occur in particular contexts. For instance, it’s easy to know that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapples.” If you have enough English text, you will see the phrase “peanut butter and jelly” again and again but might never see the phrase “peanut butter and pineapples.”
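To make that counting idea concrete, here is a minimal sketch in Python. The toy corpus and the trigram context are our own invented illustrations, not data from any actual 1950s system; a real n-gram model would count phrases across millions of sentences.

```python
from collections import Counter

# A toy corpus standing in for "enough English text"; a real n-gram
# model would be trained on vastly more than a handful of sentences.
corpus = (
    "i like peanut butter and jelly . "
    "she made peanut butter and jelly sandwiches . "
    "peanut butter and jelly is a classic . "
    "he bought peanut butter and bread ."
).split()

# Count every three-word phrase (trigram) in the corpus. This counting
# of specific phrases is the core of an n-gram model.
counts = Counter(zip(corpus, corpus[1:], corpus[2:]))

# To guess the word after "butter and", look up which third words
# followed that two-word context, and how often.
context = ("butter", "and")
continuations = {
    w3: c for (w1, w2, w3), c in counts.items() if (w1, w2) == context
}
print(continuations)  # {'jelly': 3, 'bread': 1}

# "jelly" wins simply because it was seen more often; "pineapples" never
# appears after "peanut butter and" here, so it would never be guessed.
```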
Today’s models, sets of data and rules that approximate human language, differ from these early attempts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart, not just words that are neighbors. Third, they are tuned by a huge number of internal “knobs” – so many that it is hard for even the engineers who design them to understand why they generate one sequence of words rather than another.
The models’ task, however, remains the same as in the 1950s: determine which word is likely to come next. Today, they are so good at this task that almost all sentences they generate seem fluid and grammatical.
Peanut butter and pineapples?
We asked a large language model, GPT-3, to complete the sentence “Peanut butter and pineapples___”. It said: “Peanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.
But how did GPT-3 come up with this paragraph? By generating a word that fit the context we provided. And then another one. And then another one. The model never saw, touched or tasted pineapples – it just processed all the texts on the internet that mention them. And yet reading this paragraph can lead the human mind – even that of a Google engineer – to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.
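That word-by-word process can be illustrated with a toy generation loop. Everything here is a stand-in: the hand-written continuation table plays the role of GPT-3’s billions of learned parameters, which score every word in a huge vocabulary rather than the few entries we invented.

```python
import random

# Hypothetical continuation table: for each two-word context, the words
# that "fit". GPT-3 learns such preferences from internet text; these
# entries are hand-written purely for illustration.
table = {
    ("peanut", "butter"): ["and"],
    ("butter", "and"): ["pineapples", "jelly"],
    ("and", "pineapples"): ["are"],
    ("and", "jelly"): ["are"],
    ("pineapples", "are"): ["a"],
    ("jelly", "are"): ["a"],
    ("are", "a"): ["great"],
    ("a", "great"): ["combination"],
    ("great", "combination"): ["."],
}

def generate(prompt: str, max_words: int = 10) -> str:
    words = prompt.split()
    for _ in range(max_words):
        options = table.get(tuple(words[-2:]))
        if not options:
            break                             # no known continuation: stop
        words.append(random.choice(options))  # a word that fits the context...
        if words[-1] == ".":
            break                             # ...then another, to a full stop
    return " ".join(words)

print(generate("peanut butter"))
# e.g. "peanut butter and pineapples are a great combination ."
```

Like GPT-3, the sketch has no opinion about pineapples; it only ever picks a word that plausibly follows the words so far.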
Large AI language models can engage in fluent conversation. However, they have no overall message to communicate, so their phrases often follow common literary tropes extracted from the texts they were trained on. For instance, if prompted with the topic “the nature of love,” the model might generate sentences about believing that love conquers all. The human brain primes the reader to interpret these words as the model’s opinion on the topic, but they are simply a plausible sequence of words.
The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs.
The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions.
However, in the case of AI systems, it misfires – building a mental model out of thin air.
A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.”
The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible. One begins to suspect that GPT-3 has never actually tried peanut butter and feathers.
Ascribing intelligence to machines, denying it to humans
A sad irony is that the same cognitive bias that makes people ascribe humanity to GPT-3 can cause them to treat actual humans in inhumane ways. Sociocultural linguistics – the study of language in its social and cultural context – shows that assuming an overly tight link between fluent expression and fluent thinking can lead to bias against people who speak differently.
These biases are deeply harmful, often lead to racist and sexist assumptions, and have been shown again and again to be unfounded.
Fluent language alone does not imply humanity
Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have pondered it for decades. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.
Ever wondered about the secret to a long life? Perhaps understanding the lifespans of other animals with backbones (or “vertebrates”) might help us unlock this mystery.
You’ve probably heard turtles live a long (and slow) life. At 190 years, Jonathan the Seychelles giant tortoise might be the oldest land animal alive. But why do some animals live longer than others?
Research published today by my colleagues and me in the journal Science investigates the various factors that may affect longevity (lifespan) and ageing in reptiles and amphibians.
We used long-term data from 77 different species of reptiles and amphibians – all cold-blooded animals. Our work is a collaboration between more than 100 scientists with up to 60 years of data on animals that were caught, marked, released and re-caught.
These data were then compared to existing information on warm-blooded animals, and several different ideas about ageing emerged.
What factors might be important?
Cold-blooded or warm-blooded
One popular line of thought we investigated is the idea that cold-blooded animals such as frogs, salamanders and reptiles live longer because they age more slowly.
These animals have to rely on external temperatures to help regulate their body temperature. As a result they have slower “metabolisms” (the rate at which they convert what they eat and drink into energy).
Small, warm-blooded animals such as mice age quickly because they have faster metabolisms, while turtles age slowly because theirs are slower. By this logic, cold-blooded animals, which have lower metabolisms than similar-sized warm-blooded ones, should also age more slowly.
However, we found cold-blooded animals don’t age more slowly than similar-sized warm-blooded ones. In fact, the variation in ageing in the reptiles and amphibians we looked at was much greater than previously predicted. So the reasons vertebrates age are more complex than this idea sets out.
Environmental temperature
Another related theory is that environmental temperature itself could be a driver for longevity. For instance, animals in colder areas might be processing food more slowly and have periods of inactivity, such as with hibernation – leading to an overall increase in lifespan.
Under this scenario, both cold- and warm-blooded animals in colder areas would live longer than animals in warmer areas.
We found this was true for reptiles as a group, but not for amphibians. Importantly, this finding has implications for the effects of global warming, which might lead to reptiles ageing faster in permanently warmer environments.
The viviparous lizard (Zootoca vivipara) is one of the cold-blooded species we studied. (Shutterstock)
Protection
One suggestion is that animals with certain types of protections, such as protruding spines, armor, venom or shells, also don’t age as fast and therefore live longer.
A lot of energy is put into producing these protections, which can allow animals to live longer by making them less vulnerable to predation. However, could it be that the very fact of having these protections allows animals to age more slowly?
Our work found this to be true. It seems having such protections does lead to animals living longer. This is especially true for turtles, which have hard shell protection and incredibly long lifespans.
We’ll need to conduct more research to figure out why just having protections is linked to a longer life.
One species of crocodile studied, Crocodylus johnsoni, has a powerful armoured body with protruding scales that protect it from predation. (Shutterstock)
Reproduction
Finally, it has been posited that perhaps longevity is linked to how late into life an animal reproduces.
If they can keep reproducing later into life, natural selection would drive this ability from generation to generation, allowing these animals to live longer than those that reproduce early and can’t continue to do so.
Indeed, we found animals that start producing offspring at a later age do seem to live longer lives. Sleepy lizards (or shinglebacks) are a great example. They don’t reproduce until they’re about five years old, and live until they’re close to 50!
The challenge in understanding ageing
To understand ageing, we need a lot of data on the same animals. That’s simply because if we want to know how long a species lives, we have to keep catching the same individuals over and over, across large spans of time.
This is “longitudinal” research. Luckily, it’s exactly what some scientists have committed themselves to. It’s also what my team is doing with sleepy lizards, Tiliqua rugosa. These lizards have been studied continuously at Bundey Bore station in the Mid North of South Australia since 1982.
The sleepy lizard is one of the species used in the longevity study. As far as we know, this species lives up to 50 years. (Mike Gardner)
Here, more than 13,000 lizards have been caught over 40 years of study. Some have been caught up to 60 times! But given the 45-year longevity of these lizards, we’ve been studying them for a shorter time than some of them live. By keeping the survey work going we might find they live even longer.
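A small sketch shows why those repeat captures matter: the gap between an individual’s first and last capture gives a lower bound on its lifespan. The IDs and years below are invented for illustration, not real records from the Bundey Bore study.

```python
from collections import defaultdict

# Invented capture records: (lizard_id, year of capture). The real study
# holds records like these for more than 13,000 individuals since 1982.
captures = [
    ("T001", 1984), ("T001", 1991), ("T001", 2019),
    ("T002", 1990), ("T002", 1992),
    ("T003", 1982), ("T003", 2021),
]

# Group each individual's capture years together.
years = defaultdict(list)
for lizard, year in captures:
    years[lizard].append(year)

# First-to-last capture span is only a lower bound on lifespan: the animal
# was alive the whole time and may well outlive the survey itself.
for lizard, ys in sorted(years.items()):
    print(f"{lizard}: alive for at least {max(ys) - min(ys)} years")
# T001: alive for at least 35 years
# T002: alive for at least 2 years
# T003: alive for at least 39 years
```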
Some animals’ chance of dying isn’t linked to age
Another interesting part of this research was finding that, for a range of animals, their chance of dying is just as small when they’re quite old as when they’re young. This “negligible ageing” is found in at least one species in each of the groups we studied: frogs, salamanders, lizards, crocodiles and, of course, tortoises like Jonathan.
We’re not quite sure why this is. The next challenge is to find out – perhaps by analysing species genomes. Knowing some animals have negligible ageing means we can target these species for future investigations.
Understanding what drives long life in other animals might reveal new biomedical targets for studying human ageing, too. We might not live to Jonathan the tortoise’s age, but we could theoretically use this knowledge to develop therapies that help stop some of the ageing process in us.
For now, healthy eating and exercising remain surer ways to a longer life.