The United States on Wednesday approved the world's first vaccine for the Respiratory Syncytial Virus (RSV), the culmination of a decades-long hunt to protect vulnerable people from the common illness.
Drugmaker GSK's Arexvy was green-lighted for adults aged 60 and older, with similar shots from other makers including Pfizer and Moderna expected to follow soon.
"Today's approval of the first RSV vaccine is an important public health achievement to prevent a disease which can be life-threatening," said senior US Food and Drug Administration (FDA) official Peter Marks in a statement.
The decision "marks a turning point in our effort to reduce the significant burden of RSV," added Tony Wood, GSK's chief scientific officer.
RSV is a common virus that normally causes mild, cold-like symptoms, but can be serious for infants and the elderly, as well as those with weak immune systems and underlying conditions.
In severe cases it can cause pneumonia and bronchiolitis, an inflammation of the small airways deep inside the lungs.
According to the US Centers for Disease Control and Prevention, RSV leads to approximately 60,000 to 120,000 hospitalizations and 6,000 to 10,000 deaths each year among adults 65 years of age and older.
Awareness of the disease has increased in recent years, in part because of the strain it has placed on hospital systems over the last two winters.
Rates of RSV and flu fell during Covid-19 lockdowns, but surged when restrictions were lifted, with young children hit hard.
Pharmaceutical companies have been chasing an RSV vaccine for years. Given recent breakthroughs in the sector, analysts predict the market could be worth over $10 billion in the next decade, according to reports.
More vaccines on the way
GSK's vaccine contains a "subunit" or part of the virus to help train the immune system should it encounter the real thing.
It was approved based on a study of 25,000 people aged 60 and older that showed a single dose was 83 percent effective against disease caused by RSV, and more than 94 percent effective against severe disease.
Researchers will continue to follow volunteers in the study to assess the duration of protection as well as the safety and efficacy of more doses.
The most common side effects included injection site pain, fatigue, muscle pain, headaches and joint stiffness.
An irregular heartbeat was a less common side effect, occurring in 10 participants who received Arexvy and four participants who received placebo.
Safety issues were also found in two other studies of the drug involving approximately 2,500 people aged 60 and up. In one of these studies, two volunteers developed a rare type of inflammation that affects the brain and spinal cord, and one of them died.
In the other study, one participant developed Guillain-Barre syndrome, in which the immune system damages nerve cells, causing muscle weakness and sometimes paralysis.
GSK's Arexvy was recommended for approval last week by the European Union's drug watchdog, the European Medicines Agency, whose positive opinions are normally formally followed by approval from the European Commission.
Pfizer has said that it expects a decision from the FDA in May for its own RSV vaccine, also for those over 60 years old.
In January, Moderna said it hopes its RSV vaccine will be approved and available in time for the Northern Hemisphere's winter later this year.
Several other companies are also developing RSV vaccines.
Last year, the EU approved a preventative antibody treatment against RSV, developed by British-Swedish pharmaceutical firm AstraZeneca and France's Sanofi, which confers temporary protection.
Scientists said Wednesday that they have observed a dying star swallowing a planet for the first time, offering a preview of Earth's expected fate in around five billion years.
But when the Sun finally does engulf Earth, it will cause only a "tiny perturbation" compared to this cosmic explosion, the US astronomers said.
Most planets are believed to meet their end when their host star runs out of energy, turning into a red giant that massively expands, devouring anything unlucky enough to be in its path.
Astronomers had previously seen the before-and-after effects of this process, but had never before caught a planet in the act of being consumed.
Kishalay De, a postdoctoral researcher at MIT in the United States and the lead author of the new study, said the accidental discovery unfolded like a "detective story".
"It all started about three years ago when I was looking at data from the Zwicky Transient Facility survey, which takes images of the sky every night," De told an online press conference.
He stumbled across a star that had suddenly increased in brightness by more than 100 times over a 10-day period.
The star is in the Milky Way galaxy, around 12,000 light years from Earth near the Aquila constellation, which resembles an eagle.
Ice in boiling water
De had been searching for binary star systems, in which the larger star takes bites out of its companion, creating incredibly bright explosions called outbursts.
But data showed that this outburst was surrounded by cold gas, suggesting it was not a binary star system.
And NASA's infrared space telescope NEOWISE showed that dust had started to shoot out of the area months before the outburst.
More puzzling still was that the outburst produced around 1,000 times less energy than previously observed mergers between stars.
"You ask yourself: what is 1,000 times less massive than a star?" De said.
The answer was close to home: Jupiter.
The team of researchers from MIT, Harvard and Caltech established that the swallowed planet was a gas giant with a similar mass to Jupiter, but was so close to its star that it completed an orbit in just one day.
The star, which is quite similar to the Sun, engulfed the planet over a period of around 100 days, starting off by nibbling at its edges, which ejected dust.
The bright explosion came in the final 10 days, when the planet plunged into the star and was totally destroyed.
Miguel Montarges, an astronomer at the Paris Observatory who was not involved in the research, noted that the star was thousands of degrees hotter than the planet.
"It's like putting an ice cube into a boiling pot," he told AFP.
Watching Earth's fate
Morgan MacLeod, a postdoc at Harvard University and co-author of the study, published in the journal Nature, said that most of the thousands of planets discovered outside the Solar System so far "will eventually suffer this fate".
And in comparison, Earth will most likely end not with a bang but a whimper.
When the Sun expands past Mercury, Venus and Earth in an estimated five billion years, they will make "less dramatic disturbances" because rocky planets are so much smaller than gas giants, MacLeod said.
"In fact, they will be really minor perturbations to the power output of the Sun," he said.
But even before it gets swallowed, Earth will be "quite inhospitable," because the dying Sun will have already evaporated all the planet's water, MacLeod added.
Ryan Lau, an astronomer and study co-author, said the discovery "speaks to the transience of our existence".
"After the billions of years that span the lifetime of our Solar System, our own end stages will likely conclude in a final flash that lasts only a few months," he said in a statement.
Now that astronomers know what to look for, they hope that soon they will be able to watch many more planets be consumed by their stars.
In the Milky Way alone, a planet could be engulfed once a year, De said.
US pharmaceutical giant Eli Lilly on Wednesday announced its experimental Alzheimer's drug significantly slowed cognitive and functional decline, results hailed as "remarkable" by experts despite some patients experiencing serious side effects.
In an analysis of nearly 1,200 people in the early stages of the disease, donanemab slowed the progression of symptoms by 35 percent over a period of 18 months compared to placebo.
This was measured by cognition and their ability to carry out daily tasks like managing finances, driving, engaging in hobbies and conversing about current events in a standardized index called the Integrated Alzheimer's Disease Rating Scale (iADRS).
Side effects included temporary swelling in parts of the brain, which occurred in almost a quarter of the treated patients, as well as microhemorrhages that occurred in 31 percent of patients on the treatment arm and 14 percent of patients in the placebo group.
Two participants' deaths were attributed to the side effects, and a third death may also have been linked to the treatment.
Nonetheless, the data was widely praised by independent experts, who said donanemab had the potential, if approved, to significantly improve the lives of people suffering from the most common form of dementia.
The news comes after the US approved another Alzheimer's drug in January, Biogen and Eisai's lecanemab, which slowed the rate of cognitive decline by 27 percent and was also declared a blockbuster by experts.
Biogen and Eisai had also developed aducanumab, known by the trade name Aduhelm, which was given US approval in 2021, though that decision was mired in controversy and led to a damning report by Congress.
In addition to severe side effects, Aduhelm's clinical effectiveness was ambiguous, which is so far not the case for the two subsequent drugs.
Lilly said it would rapidly submit its results to the US Food and Drug Administration (FDA) as well as other global regulators.
"We are extremely pleased that donanemab yielded positive clinical results with compelling statistical significance for people with Alzheimer's disease in this trial," said Daniel Skovronsky, Lilly's chief scientific and medical officer, in a statement.
Mark Mintun, a top Lilly executive in neuroscience R&D, added however that "like many effective treatments for debilitating and fatal diseases, there are associated risks that may be serious and life-threatening."
Eli Lilly's stock price rose 4.3 percent after Wednesday's announcement.
Targeting amyloid
In Alzheimer's disease, two key proteins, tau and amyloid beta, build up into tangles and plaques, known together as aggregates, which cause brain cells to die and lead to brain shrinkage.
Like lecanemab (also known by its trade name Leqembi), donanemab is an antibody therapy that targets amyloid beta.
Experts said that the results for both drugs validated the theory that removing amyloid beta does improve the course of the disease, and that future therapies targeting both proteins might have even better outcomes.
Nick Fox, of the UK Dementia Research Institute, said that although the full dataset was not yet available, the results announced by press release "confirm that we are in a new era of disease modification for Alzheimer's disease."
"This clinical trial is a real breakthrough, demonstrating a remarkable 35% slowing of cognitive decline in Alzheimer's patients with high amyloid beta but low tau burden," added Marc Busche, UK Dementia Research Institute group leader at University College London.
"These are the strongest phase 3 data for an Alzheimer's treatment to date," said Maria Carrillo, chief science officer at the US Alzheimer's Association. "This further underscores the inflection point we are at for the Alzheimer's field."
Alzheimer's disease accounts for 60-80 percent of dementia, according to the Alzheimer's Association. It progressively destroys thinking and memory, eventually robbing people of the ability to carry out the simplest of tasks.
The famous first image of a black hole just got two times sharper. A research team used artificial intelligence to dramatically improve upon its first image from 2019, which now shows the black hole at the center of the M87 galaxy as darker and bigger than the first image depicted.
Since then, AI has spread into every field of astronomy. As the technology has become more powerful, AI algorithms have begun helping astronomers tame massive data sets and discover new knowledge about the universe.
Astronomy is no longer limited to just optical images – radio telescopes produce huge amounts of data that researchers need to process. Wenbin/Moment via Getty Images
Better telescopes, more data
As long as astronomy has been a science, it has involved trying to make sense of the multitude of objects in the night sky. That was relatively simple when the only tools were the naked eye or a simple telescope, and all that could be seen were a few thousand stars and a handful of planets.
A hundred years ago, Edwin Hubble used newly built telescopes to show that the universe is filled with not just stars and clouds of gas, but countless galaxies. As telescopes have continued to improve, the sheer number of celestial objects humans can see and the amount of data astronomers need to sort through have both grown exponentially, too.
For example, the soon-to-be-completed Vera Rubin Observatory in Chile will make images so large that it would take 1,500 high-definition TV screens to view each one in its entirety. Over 10 years it is expected to generate 0.5 exabytes of data – about 50,000 times the amount of information held in all of the books contained within the Library of Congress.
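Those figures can be sanity-checked with a line of arithmetic. The sketch below simply divides out the article's own ratio; the roughly 10-terabyte result for the Library of Congress is implied by that ratio, not an official measurement.

```python
# Sanity-check of the scale comparison in the text:
# 0.5 exabytes over 10 years, described as ~50,000 times the
# information held in the Library of Congress's books.
EXABYTE = 10**18                          # bytes
survey_total = 0.5 * EXABYTE
loc_equivalent = survey_total / 50_000    # implied size of one "library"
per_year = survey_total / 10              # average data rate

print(f"Implied Library of Congress size: {loc_equivalent / 10**12:.0f} TB")
print(f"Average data per year: {per_year / 10**15:.0f} PB")
```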
There are 20 telescopes with mirrors larger than 20 feet (6 meters) in diameter. AI algorithms are the only way astronomers could ever hope to work through all of the data available to them today. There are a number of ways AI is proving useful in processing this data.
One of the earliest uses of AI in astronomy was to pick out the multitude of faint galaxies hidden in the background of images. ESA/Webb, NASA & CSA, J. Rigby, CC BY
Picking out patterns
Astronomy often involves looking for needles in a haystack. About 99% of the pixels in an astronomical image contain background radiation, light from other sources or the blackness of space – only 1% have the subtle shapes of faint galaxies.
AI algorithms – in particular, neural networks that use many interconnected nodes and are able to learn to recognize patterns – are perfectly suited for picking out the patterns of galaxies. Astronomers began using neural networks to classify galaxies in the early 2010s. Now the algorithms are so effective that they can classify galaxies with an accuracy of 98%.
This story has been repeated in other areas of astronomy. Astronomers working on SETI, the Search for Extraterrestrial Intelligence, use radio telescopes to look for signals from distant civilizations. Early on, radio astronomers scanned charts by eye to look for anomalies that couldn’t be explained. More recently, researchers harnessed 150,000 personal computers and 1.8 million citizen scientists to look for artificial radio signals. Now, researchers are using AI to sift through reams of data much more quickly and thoroughly than people can. This has allowed SETI efforts to cover more ground while also greatly reducing the number of false positive signals.
AI has proved itself to be excellent at identifying known objects – like galaxies or exoplanets – that astronomers tell it to look for. But it is also quite powerful at finding objects or phenomena that are theorized but have not yet been discovered in the real world.
Teams have used this approach to detect new exoplanets, learn about the ancestral stars that led to the formation and growth of the Milky Way, and predict the signatures of new types of gravitational waves.
To do this, astronomers first use AI to convert theoretical models into observational signatures – including realistic levels of noise. They then use machine learning to sharpen the ability of AI to detect the predicted phenomena.
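As a rough illustration of that simulate-then-detect idea, the sketch below turns an invented "theoretical" burst shape into a noisy mock observation and flags it with a simple matched filter. The template, noise level and filter are all stand-ins for illustration; real pipelines train far more sophisticated models on such simulations.

```python
import random

def simulate(template, noise_level, rng):
    # Convert a theoretical signature into a mock "observation"
    # by adding realistic (here: Gaussian) noise.
    return [x + rng.gauss(0, noise_level) for x in template]

def matched_filter(observation, template):
    # Correlate the observation with the predicted signature;
    # a high score suggests the phenomenon is present.
    return sum(o * t for o, t in zip(observation, template))

rng = random.Random(42)
template = [0.0, 1.0, 3.0, 6.0, 3.0, 1.0, 0.0]   # invented burst shape
observation = simulate(template, noise_level=0.5, rng=rng)
background = [rng.gauss(0, 0.5) for _ in template]  # noise with no signal

print(matched_filter(observation, template) > matched_filter(background, template))
```

The same detector can then be run over real data, where anything that scores highly against the simulated signature becomes a candidate for follow-up.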
Finally, radio astronomers have also been using AI algorithms to sift through signals that don’t correspond to known phenomena. Recently a team from South Africa found a unique object that may be a remnant of the explosive merging of two supermassive black holes. If this proves to be true, the data will allow a new test of general relativity – Albert Einstein’s description of space-time.
The team that first imaged a black hole, at left, used AI to generate a sharper version of the image, at right, showing the black hole to be larger than originally thought. Medeiros et al 2023, CC BY-ND
Making predictions and plugging holes
As in many areas of life recently, generative AI and large language models like ChatGPT are also making waves in the astronomy world.
The team that created the first image of a black hole in 2019 used a generative AI to produce its new image. To do so, it first taught an AI how to recognize black holes by feeding it simulations of many kinds of black holes. Then, the team used the AI model it had built to fill in gaps in the massive amount of data collected by the radio telescopes on the black hole M87.
Using this simulated data, the team was able to create a new image that is two times sharper than the original and is fully consistent with the predictions of general relativity.
Astronomers are also turning to AI to help tame the complexity of modern research. A team from the Harvard-Smithsonian Center for Astrophysics created a language model called astroBERT to read and organize 15 million scientific papers on astronomy. Another team, based at NASA, has even proposed using AI to prioritize astronomy projects, a process that astronomers engage in every 10 years.
As AI has progressed, it has become an essential tool for astronomers. As telescopes get better, as data sets get larger and as AIs continue to improve, it is likely that this technology will play a central role in future discoveries about the universe.
SAN DIEGO -- What are my odds of dying after swallowing a toothpick? Do I need to see a doctor after hitting my head on a metal bar while running? Am I likely to go blind after getting bleach splashed in my eye? A new study led by researchers at UC San Diego explores how artificial intelligence compares to human expertise in the workaday task of dashing off quick responses to routine medical questions. Published Friday in the medical journal JAMA Internal Medicine, the paper finds that ChatGPT, the world-upending chatbot with a seemingly infinite breadth of training, was able to more than hold its own.
Waving lines appear in your field of vision, you have trouble finding words and feel a slight numbness - the sudden arrival of the so-called aura can give you quite a scare. As many as a quarter of migraine sufferers experience this phenomenon. Oliver Killig/dpa
Just as the dawn heralds the start of the day, so too does an aura often signal the onset of a migraine headache. This is why the sensory disturbances that can precede a migraine are named after Aurora, the ancient Roman goddess of the dawn.
But an aura isn't anything like beautifully coloured skies, and what follows isn't pleasant either: a type of recurrent headache that can cause severe throbbing pain, usually on one side of the head. Not the best way to start your day.
"Auras typically develop over 20 to 30 minutes about an hour before the headache phase of a migraine," says Dr Hartmut Göbel, chief physician at the Kiel Migraine, Headache and Pain Centre in Germany. Then they subside and the pain begins.
Between 15% and 25% of migraine patients have them, according to the German Migraine and Headache Society (DMKG).
Visual disturbances are common aura symptoms, such as unilateral visual field loss. This means that "flickering lights may increasingly appear on the left or right side of your visual field," Göbel says. Affected persons also often see jagged lines that continue to spread or develop coloured edges.
Blind spots can occur as well, making it difficult to read. Or migraine sufferers may have the impression that they're looking through a veil or through streaks or smears.
"Before a migraine attack, basically every symptom that can be triggered by a disturbed electrical excitability of the cerebral cortex [the outermost layer of tissue in the brain] may occur," Göbel says.
The symptoms can include dizziness, fatigue or a tingling sensation, for example in the hands. Some migraine sufferers have trouble finding words or can no longer concentrate. Epileptic seizures or a loss of consciousness may occur as well.
Migraines, and migraine auras, are caused by a congenital peculiarity of the brain's stimulus processing. "The [patients'] nervous system is constantly under 'high tension,'" Göbel says.
Their brain picks up stimuli earlier and faster than normal, and also processes them faster. If too many stimuli stream in too quickly and too suddenly, nerve cells are strongly activated, which can cause their energy supply to break down. Nerve function gets out of control as a result.
"And since the brain's visual cortex [the part of the cerebral cortex that processes visual signals] needs a particularly high amount of energy, the visual disturbances that are typical of an aura occur," says Göbel.
It's virtually impossible to influence the course of an aura, remarks Dr Charly Gaul, a neurologist at the Frankfurt Headache Centre in Germany. Migraine sufferers familiar with their auras know this.
If anything, "a pronounced aura is more likely to force affected persons to interrupt their activities, such as driving, and wait until the symptoms subside," he says.
To better cope with an aura, patients must learn how to deal with migraine attacks, notes Gaul. "They're often afraid that a brain disease could be the cause or that a stroke is imminent," Gaul says. "The best strategy is to inform yourself about auras."
Migraines are associated with a slightly elevated risk of vascular diseases such as stroke or heart attack. The reason is unclear. "The risk doesn't exist during an aura, however - it's generally elevated," Gaul says. "Genetic factors may play a role."
He therefore recommends a healthy lifestyle to lower the risk.
"If you experience a migraine attack for the first time and it's severe or accompanied by untypical symptoms, emergency admission to hospital is advisable so as not to overlook other possible illnesses," he says.
As regards therapeutics for auras, "no medications or household remedies are available for acute treatment," says Gaul. There are, however, medications that patients can take prophylactically to reduce the frequency of migraines - and hence their accompanying auras.
What about triptans? These are a class of prescription drugs known as abortives that can help stop or decrease migraine symptoms by blocking pain pathways in the brain. It's often said they should be taken only when the aura subsides.
"At first it was thought that their mechanism of action derived from constricting blood vessels in the brain," Gaul says. This is why the patient information leaflet says they shouldn't be taken during the aura.
"In fact, however, these medications mainly affect the release of chemical messengers. The associated changes in blood vessel diameter are more of an indirect consequence."
But according to Gaul, some studies suggest that triptans taken during the aura may not have an optimal effect on migraine attacks. "These questions haven't been fully answered though," he says.
What's more, Gaul adds, you've got to take into account that the medication first has to be absorbed before it takes effect. "And by then, the aura is often over already." When it comes to questions like this, he advises consulting a physician.
Feeling drained? We all get like that sometimes. But it's a state of mind that manufacturers of products such as teas, capsules or juice cures are happy to offer remedies for.
"Detox" products promise to do just that, to remove toxins from the body and make us feel fitter and stronger. But is this all just a marketing ploy? Do the products really do anything for us?
Liver, kidneys and lungs detoxify the body
According to Annett Reinke, nutrition specialist at the Consumer Protection Agency in the German state of Brandenburg, the answer is clear: Even the term "detox" itself is misleading.
It promises to detoxify the body, but that’s something the body does anyway. "It does that itself, and quite excellently," agrees Matthias Riedl, a Hamburg-based nutritional physician.
Harmful substances of any kind are broken down by the liver, intestines, kidneys and lungs and then expelled from the body. It is only in the case of acute poisoning, for example due to an overdose of medication, that a doctor has to take countermeasures with an antidote.
It is true that lead from water pipes, for example, can accumulate in fatty tissue. "However, there is no scientific evidence that such a harmful substance can be removed from the body with a detox product," says nutritionist Riedl.
Patches that turn dark
One product that Reinke views with particular scepticism are detox patches. According to the manufacturer’s instructions, the patches should be attached to the soles of the feet in the evening before going to sleep. If the patches turn dark overnight, this is supposed to prove that they have sucked toxins out of the body.
"The darkening could be due to the combination of heat under the covers with moisture," Reinke speculates. But it is not due to detoxification, she says, because there is absolutely no scientific evidence for that.
It is precisely this lack of scientific evidence that means manufacturers are prohibited from using detox promises in the marketing of their products.
At least this applies to food products, according to a 2018 ruling by the German Federal Court of Justice. However: some manufacturers are not complying with this ruling, while others have rewritten the marketing content for their products.
It's important to mention that some detox products do contain quite healthy ingredients such as pumpkin seeds, green tea and healing clay. "They don’t harm the body. However, they also don’t have the promised detoxifying effect,” says Riedl.
Dehydrating products should be avoided
There are also some detox products that are best avoided. Riedl advises against things like detoxifying juice cures, whereby the only nutrition taken for days is juice. These juices, which consist of fruits and vegetables, are far too sugary, and therefore can be a burden for the liver.
Caution is also advised when it comes to dehydrating products: Anyone who consumes these over a longer period excretes increased amounts of minerals such as sodium, potassium, calcium and magnesium. This can upset the body's electrolyte balance. The result can be cramps or muscle disorders.
So, what can you do if you want to really do something good for your body – but without resorting to patches and powders? The best thing is not to allow so many harmful substances into the body in the first place: "Avoid nicotine and alcohol if possible," advises Reinke.
The nutrition professionals recommend eating plenty of fruit and vegetables. However: "Always wash fruit and vegetables thoroughly before eating them," says Reinke. With leafy vegetables it is advisable to remove the outer leaves as well as the stalk to prevent ingesting heavy metals. Organic produce usually contains fewer harmful substances.
To enable the kidneys to filter out toxins quickly and effectively, you should drink enough liquid - about two to three litres of water and unsweetened teas a day.
According to the manufacturers, if the detox patches change colour overnight, this indicates that toxins have been removed from the body. However, there are indications that it is rather the heat and the sweating under the duvet that cause the patches to darken. Franziska Gabbert/dpa
Green tea - more specifically, matcha - is often a component of detox powders. These are used, for example, as an ingredient in smoothies. Klaus-Dietmar Gabbert/dpa
Removing toxins from the body overnight: that's what detox patches promise. But this has not been scientifically proven. Franziska Gabbert/dpa
WASHINGTON (AFP) - The White House plans to meet with top executives from Google, Microsoft, OpenAI and Anthropic on Thursday to discuss the promise and risks of artificial intelligence.
Vice President Kamala Harris and other U.S. administration officials will discuss ways to ensure consumers benefit from AI while being protected from its harms, according to a copy of an invitation seen by AFP.
President Joe Biden expects tech companies to make sure products are safe before being released to the public, the invitation said.
US regulators last month took a step towards drawing up rules on AI that could see the White House put the brakes on new technologies such as ChatGPT.
The U.S. Department of Commerce put out a call for input from industry actors that would serve to inform the Biden administration in drafting regulation on AI.
"Just as food and cars are not released into the market without proper assurance of safety, so too AI systems should provide assurance to the public, government, and businesses that they are fit for purpose," the Commerce Department said in a statement at the time.
The United States is home to the biggest innovators in tech and AI -- including Microsoft-backed OpenAI, which created ChatGPT — but trails internationally in regulating the industry.
Google in March invited users in the United States and Britain to test its AI chatbot, known as Bard, as it gradually works to catch up with ChatGPT.
Biden has urged Congress to pass laws putting stricter limits on the tech sector, but these efforts have little chance of making headway given political divisions among lawmakers.
The lack of rules has given Silicon Valley freedom to put out new products rapidly — and stoked fears that AI technologies will wreak havoc on society before the government can catch up.
Billionaire Elon Musk in early March formed an AI company called X.AI, based in Nevada, according to business documents.
Musk, who is already the boss of Twitter and Tesla, is listed as director of X.AI Corporation, a state business filing indicated.
Musk's founding of what appears to be a rival to OpenAI came despite him recently joining tech leaders and AI critics in calling for an overall pause in the development of artificial intelligence.
Google, Meta and Microsoft have spent years working on AI systems to help with translations, internet searches, security and targeted advertising.
But late last year San Francisco firm OpenAI supercharged the interest in the AI sphere when it launched ChatGPT, a bot that can generate natural-seeming text responses from short prompts.
The technology to decode our thoughts is drawing ever closer. Neuroscientists at the University of Texas have for the first time decoded data from non-invasive brain scans and used them to reconstruct language and meaning from stories that people hear, see or even imagine.
Technology that can create language from brain signals could be enormously useful for people who cannot speak due to conditions such as motor neurone disease. At the same time, it raises concerns for the future privacy of our thoughts.
Language decoded
Language decoding models, also called “speech decoders”, aim to use recordings of a person’s brain activity to discover the words they hear, imagine or say.
Until now, speech decoders have only been used with data from devices surgically implanted in the brain, which limits their usefulness. Other decoders that used non-invasive brain activity recordings could decode only single words or short phrases, not continuous language.
The new research used the blood oxygen level dependent signal from fMRI scans, which shows changes in blood flow and oxygenation levels in different parts of the brain. By focusing on patterns of activity in brain regions and networks that process language, the researchers found their decoder could be trained to reconstruct continuous language (including some specific words and the general meaning of sentences).
Specifically, the decoder took the brain responses of three participants as they listened to stories, and generated sequences of words that were likely to have produced those brain responses. These word sequences did well at capturing the general gist of the stories, and in some cases included exact words and phrases.
The researchers also had the participants watch silent movies and imagine stories while being scanned. In both cases, the decoder often managed to predict the gist of the stories.
For example, one participant thought “I don’t have my driver’s license yet”, and the decoder predicted “she has not even started to learn to drive yet”.
Further, when participants actively listened to one story while ignoring another story played simultaneously, the decoder could identify the meaning of the story being actively listened to.
How does it work?
The researchers started out by having each participant lie inside an fMRI scanner and listen to 16 hours of narrated stories while their brain responses were recorded.
These brain responses were then used to train an encoder – a computational model that tries to predict how the brain will respond to words a user hears. After training, the encoder could quite accurately predict how each participant’s brain signals would respond to hearing a given string of words.
However, going in the opposite direction – from recorded brain responses to words – is trickier.
The encoder model is designed to link brain responses with “semantic features” or the broad meanings of words and sentences. To do this, the system uses the original GPT language model, which is the precursor of today’s GPT-4 model. The decoder then generates sequences of words that might have produced the observed brain responses.
The accuracy of each “guess” is then checked by using it to predict previously recorded brain activity, with the prediction then compared to the actual recorded activity.
During this resource-intensive process, multiple guesses are generated at a time, and ranked in order of accuracy. Poor guesses are discarded and good ones kept. The process continues by guessing the next word in the sequence, and so on until the most accurate sequence is determined.
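The guess-and-check loop described above is essentially a beam search. Below is a minimal sketch of that loop in Python, with toy stand-ins for both the language model that proposes continuations and the trained fMRI encoder that scores them; all names, vectors and scoring choices here are illustrative assumptions, not the researchers’ actual code.

```python
# Toy beam-search decoder: propose word sequences, score each by how well a
# (stand-in) encoder's predicted brain response matches the recorded one,
# keep the best guesses, and extend them word by word.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "dog", "ran", "home", "fast", "slowly"]

# Hypothetical stand-in for the trained encoder: assign each word a random
# vector and sum over the sequence, so similar sequences give similar
# predicted "brain responses".
WORD_VECS = {w: rng.normal(size=8) for w in VOCAB}

def encode(words):
    """Predict a brain-response vector for a word sequence (toy encoder)."""
    return sum((WORD_VECS[w] for w in words), np.zeros(8))

def score(predicted, actual):
    """Higher is better: negative squared error between predicted and
    recorded brain activity."""
    return -float(np.sum((predicted - actual) ** 2))

def decode(actual_response, length=3, beam_width=2):
    """Beam search: extend candidate word sequences, keeping only the ones
    whose predicted brain response best matches the recorded one."""
    beams = [[]]
    for _ in range(length):
        candidates = [b + [w] for b in beams for w in VOCAB]
        candidates.sort(key=lambda c: score(encode(c), actual_response),
                        reverse=True)
        beams = candidates[:beam_width]  # discard poor guesses, keep good ones
    return beams[0]

# Pretend a participant heard "the dog ran" and we recorded their response.
target = encode(["the", "dog", "ran"])
print(decode(target))  # the best-matching word sequence found
```

In the real system the candidate continuations come from a GPT language model rather than a fixed vocabulary, and the scoring encoder is the model trained on 16 hours of each participant’s fMRI data, which is why the process is so resource-intensive.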
Words and meanings
The study found data from multiple, specific brain regions – including the speech network, the parietal-temporal-occipital association region, and prefrontal cortex – were needed for the most accurate predictions.
One key difference between this work and earlier efforts is the data being decoded. Most decoding systems link brain data to motor features or activity recorded from brain regions involved in the last step of speech output, the movement of the mouth and tongue. This decoder works instead at the level of ideas and meanings.
One limitation of using fMRI data is its low “temporal resolution”. The blood oxygen level dependent signal rises and falls over approximately a 10-second period, during which time a person might have heard 20 or more words. As a result, this technique cannot detect individual words, but only the potential meanings of sequences of words.
No need for privacy panic (yet)
The idea of technology that can “read minds” raises concerns over mental privacy. The researchers conducted additional experiments to address some of these concerns.
These experiments showed we don’t need to worry just yet about having our thoughts decoded while we walk down the street, or indeed without our extensive cooperation.
A decoder trained on one person’s thoughts performed poorly when predicting the semantic detail from another participant’s data. What’s more, participants could disrupt the decoding by diverting their attention to a different task such as naming animals or telling a different story.
Movement in the scanner can also disrupt the decoder as fMRI is highly sensitive to motion, so participant cooperation is essential. Considering these requirements, and the need for high-powered computational resources, it is highly unlikely that someone’s thoughts could be decoded against their will at this stage.
Finally, the decoder does not currently work on data other than fMRI, which is an expensive and often impractical procedure. The group plans to test their approach on other non-invasive brain data in the future.
Homo sapiens, our own species, evolved in Africa sometime between 300,000 and 200,000 years ago. Anthropologists are pretty confident in that estimate, based on fossil, genetic and archaeological evidence.
Then what happened? How modern humans spread throughout the rest of the world is one of the most active areas of research in human evolutionary studies.
The earliest fossil evidence of our species outside of Africa is found at a site called Misliya cave, in the Middle East, and dates to around 185,000 years ago. While additional H. sapiens fossils are found from around 120,000 years ago in this same region, it seems modern humans reached Europe much later.
Understanding when our species migrated out of Africa can reveal insights into present-day biological, behavioral and cultural diversity. While we Homo sapiens are the only humans alive today, our species coexisted with different human lineages in the past, including Neandertals and Denisovans. Scientists are interested in when and where H. sapiens encountered these other kinds of humans.
The first documented discoveries of human fossils were in Europe, just before Darwin’s 1859 publication of “The Origin of Species.” Ideas of evolution were being actively debated within European universities and scientific societies.
Many of the earliest fossil findings were Neandertals, a species that evolved in Europe by 250,000 years ago and became extinct around 40,000 years ago. They are also our closest evolutionary relatives and, because of ancient interbreeding, the genomes of people today include Neandertal DNA. Because of their early historical presence, Neandertal fossils had a big influence on how early researchers thought about human evolution.
The first fossil evidence of Neandertals was found in 1856 during quarrying activities from the Neander Tal (Neander Valley) in Germany. Paleontologists took the hint and started to search for human fossils in other caves and exposed areas that preserved ancient sediments.
More than a decade later, in 1868, paleontologists uncovered H. sapiens fossils at the site of Cro-Magnon in southern France. For much of the 20th century, the 30,000-year-old Cro-Magnon fossils represented the earliest fossil evidence of our species in Europe.
More recently, evidence for an earlier H. sapiens presence in Europe has come from two sites in Eastern Europe: a partial skull from Zlatý kůň Cave in Czechia dating to 45,000 years ago, and more fragmentary remains from Bacho Kiro Cave in Bulgaria dating to around 44,000 years ago. Ancient DNA analysis has confirmed that the fossils from these sites represent H. sapiens. Additional, potentially earlier, evidence is represented by a single tooth dating to 54,000 years ago from the Grotte Mandrin Cave in France.
This is where the human fossil from Banyoles comes into the story.
A new look at an old fossil find potentially pushes back the date when Homo sapiens lived in Europe.
Reinvestigating a ‘Neandertal’ mandible
Over a century ago, in 1889, a fossil human lower jaw, or mandible, was found at a quarry near the town of Banyoles in northeastern Spain. Pere Alsius, a prominent local pharmacist, first studied the mandible, and the fossil has been curated by his family ever since.
A number of anthropologists have studied the fossil over time, but it has not usually been included in discussions about H. sapiens in Europe. Most researchers instead argued it represented a Neandertal or showed Neandertal-like features, in part because the Banyoles fossil lacks a feature considered typical and diagnostic of our own species: a bony chin on the front of the mandible.
Researchers did not have a good idea of how old the Banyoles mandible was, with most believing it likely dated to the Middle Pleistocene (780,000-130,000 years ago). That age made it seem too old to represent H. sapiens. Thus, with the absence of a chin and the presumed early date, the designation as a Neandertal seemed to make sense.
Map of the Iberian Peninsula indicating where the Banyoles mandible (yellow star) was found, along with Late Pleistocene Neandertal (orange triangles) and H. sapiens (white squares) sites. Brian A. Keeling
Based on modern uranium-series and electron spin resonance dating, researchers now believe the Banyoles mandible is between 45,000 and 66,000 years old. This younger estimate overlaps with the early H. sapiens fossils from Eastern Europe.
Working with Spanish paleoanthropologists and archaeologists, we took another look at what species the fossil might represent. We relied on a CT scan to virtually reconstruct damaged or missing portions of the mandible and generated a 3D model of the complete fossil. Then, we studied its overall shape and distinctive anatomical features, comparing it to H. sapiens, Neandertals and other earlier human species.
Virtual reconstruction of the 3D model of the Banyoles mandible. Highlighted piece in blue indicates a mirrored element that researchers used to fill out missing sections. Brian A. Keeling
In contrast to earlier analyses, our results revealed that the Banyoles jawbone was most similar to H. sapiens fossils – not Neandertals.
When we examined the mandible’s bony features where muscle tendons and ligaments would have attached, it most closely resembled H. sapiens. We also found no unique bony features shared with the Neandertals. Additionally, when we used sophisticated 3D analysis techniques, we found that Banyoles’ overall shape was a better match with H. sapiens than with Neandertal individuals.
While nearly all of our evidence suggests this prehistoric human was indeed a member of our species, the lack of a chin remains puzzling. This feature is present in all human populations today and should be present in Banyoles if it is a member of our species.
Figuring out the closest match
How do we reconcile our results showing that Banyoles is a modern human with the fact that it lacks one of the most distinctive modern human features? We considered several possible scenarios.
When the mandible was discovered, it was still encased in a hard travertine block and only partially exposed. During initial cleaning and preparation of the specimen, it was accidentally dropped and the chin region was damaged. The fossil was subsequently reconstructed, with the damaged fragments aligned in their correct anatomical position, and the current state of the fossil does seem to accurately reflect an original chinless shape. Thus, the lack of a chin in Banyoles cannot be attributed to this initial incident.
Could the lack of a chin in the Banyoles fossil be a result of interbreeding with Neandertals, who also lacked a chin? Genetic evidence suggests that H. sapiens most likely interbred with Neandertals between 45,000 and 65,000 years ago, making this a possibility.
To assess this hypothesis, we compared Banyoles with an early H. sapiens mandible dating to about 42,000 years ago from a Romanian site called Peştera cu Oase. Ancient DNA analysis has revealed that the Oase individual had a Neandertal ancestor between four and six generations back, making it close to a hybrid individual. However, unlike Banyoles, this mandible shows a full chin along with some other Neandertal features. Since Banyoles shared no distinctive features with Neandertals, we ruled out the possibility of this individual representing interbreeding between Neandertals and H. sapiens.
Comparison of mandibles between H. sapiens, at left; Banyoles, center; and a Neandertal, at right. Brian A. Keeling
We’re left with two possibilities. Banyoles may represent a hybrid individual between H. sapiens and a non-Neandertal archaic human lineage. This scenario might account for the absence of the chin as well as the lack of any other Neandertal features in Banyoles. However, scientists haven’t identified any such non-Neandertal archaic group in the fossil record of the European Late Pleistocene (129,000-11,700 years ago), making this hypothesis less likely.
Alternatively, Banyoles may document a previously unknown lineage of largely chinless H. sapiens in Europe. Possible support for this hypothesis comes from the fact that early H. sapiens fossils from Africa and the Middle East show a less prominent chin than do living humans.
Additionally, ancient DNA research has shown that H. sapiens populations in Europe before 35,000 years ago did not contribute to the modern European gene pool. Thus, we believe the most likely hypothesis is that Banyoles represents an individual from one of these early H. sapiens populations.
Our study of Banyoles demonstrates how new discoveries about our evolutionary past do not solely rely on new fossil discoveries, but can also come about through applying new methodologies to previously discovered fossils. If Banyoles is really a member of our species, it would potentially represent the earliest H. sapiens lineage documented to date in Europe. Future ancient DNA analysis could confirm or refute this surprising result. In the meantime, the 3D model of Banyoles is available for other researchers to study and form their own conclusions.
A computer scientist often dubbed "the godfather of artificial intelligence" has quit his job at Google to speak out about the dangers of the technology, US media reported Monday.
Geoffrey Hinton, who created a foundation technology for AI systems, told The New York Times that advancements made in the field posed "profound risks to society and humanity".
"Look at how it was five years ago and how it is now," he was quoted as saying in the piece, which was published on Monday.
"Take the difference and propagate it forwards. That's scary."
Hinton said that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, risking jobs and spreading misinformation.
"It is hard to see how you can prevent the bad actors from using it for bad things," he told the Times.
In 2022, Google and OpenAI -- the start-up behind the popular AI chatbot ChatGPT -- started building systems using much larger amounts of data than before.
Hinton told the Times he believed that these systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing.
"Maybe what is going on in these systems is actually a lot better than what is going on in the brain," he told the paper.
While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk.
AI "takes away the drudge work" but "might take away more than that", he told the Times.
The scientist also warned about the potential spread of misinformation created by AI, telling the Times that the average person will "not be able to know what is true anymore."
Hinton notified Google of his resignation last month, the Times reported.
Jeff Dean, lead scientist for Google AI, thanked Hinton in a statement to US media.
"As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI," the statement added.
"We're continually learning to understand emerging risks while also innovating boldly."
In March, tech billionaire Elon Musk and a range of experts called for a pause in the development of AI systems to allow time to make sure they are safe.
An open letter, signed by more than 1,000 people including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, a much more powerful version of the technology used by ChatGPT.
Hinton did not sign that letter at the time, but told The New York Times that scientists should not "scale this up more until they have understood whether they can control it."
Most of the troubles plaguing the subtropical waters of Florida and the Caribbean revolve around disappearing marine life: coral reefs, fish populations, sea grass beds. It’s decidedly the opposite case with sargassum, the floating brown seaweed that has exploded in record-setting mass throughout the region. Nothing can stop the stinky brown mats from carpeting beaches and shorelines through this summer: Sargassum quantities hit record levels in the Caribbean in April, according to researchers at the University of South Florida, and the scientists wrote in a May 1 report that sargassum totals ...
Cancer is an evolutionary disease. The same forces that turned dinosaurs into birds turn normal cells into cancer: genetic mutations and traits that confer a survival advantage.
Evolution in animals is largely driven by mutations in the DNA of germ cells – the sperm and egg that fuse to form an embryo. These mutations may confer traits that differ from those of the offspring’s parents, such as larger paws, sharper teeth or lighter hair color. If the change is beneficial, like a mutation that lightens the hair of a rabbit living in a snowy climate, the animal is better able to survive, mate and pass on its mutated gene to the next generation. Such changes accumulate over millions of years, eventually turning, for example, dinosaurs into bluebirds.
Evolution is natural selection of particularly advantageous traits over time.
Cancer arises by these same evolutionary pressures, but at the level of individual cells within a person’s body. Instead of animals fighting for survival in a harsh environment, cells compete for space and nutrients. Because different organs are composed of different kinds of cells, cancers arising from different organs differ from one another in appearance and behavior and in how well they respond to treatment.
We are a team of oncologists, pathologists and translational scientists who work together to study how cancers evolve. We believe that understanding evolution is key to understanding how cancer arises and how to treat it.
Timing is of the essence
Human cells are normally in a constant state of death and renewal. Old cells die and are replaced by new ones. These phases of death and renewal are usually orderly, with cells cooperating in a complex process that provides them with proper nutrition and replaces them at a constant rate, maximizing the overall function of the organ they make up.
Mutations disrupt this orderly process. Changes to the cell’s DNA alter the proteins that make up the cell’s structure and govern its behavior, sometimes in ways that lead it to duplicate itself faster than its neighbors, resist normal death signals and sequester nutrients for itself.
The immune system attacks and kills mutant cells in most cases. However, if one survives and duplicates itself many times over, it can form a tumor made of multiple mutant cells. These tumor cells continue to reproduce and mutate, evolving until the tumor ultimately gains the ability to spread throughout the body.
Cancer detected at the earliest stages of this evolution can be treated more effectively than cancer at more advanced stages. This observation underlies the effectiveness of cancer screening programs in reducing cancer rates.
For example, colon cancer begins as a polyp, a small tumor on the interior surface of the colon that is harmless on its own but may eventually evolve and gain the ability to invade the colon wall and spread throughout the body. Precancerous polyps are easily removed during colonoscopy screenings, preventing them from evolving to invasive colon cancer.
Different cancers require different treatments
In general, cancers from different organs look distinct from one another and contain different proteins. This leads to variations in how they behave.
Under the microscope, cancer looks like a distorted and disorganized version of the normal tissue from which it arose. Cancer cells tend to contain the same set of proteins as those in healthy organs, and likewise continue to perform many of the same functions. For example, prostate cancer contains large amounts of androgen receptors, proteins that bind to testosterone and drive cells to grow and survive. Androgen receptors both enable normal prostate function and drive growth of prostate cancer.
Tumors arising in a given organ also tend to have mutations in the same set of genes, even among different patients. For example, around half of patients with melanoma, an aggressive type of skin cancer, have a mutation in the BRAF gene that enhances cell growth and survival. In contrast, BRAF mutations are rare in lung cancer.
Pathologists look at tissue samples under a microscope to identify cancer cells.
Cancers also differ in the number of mutations they contain, and this number is strongly associated with the organ from which they arise. The prevalence of mutations is also influenced by mutations in genes that control DNA repair. For example, thyroid cancers typically have a low number of mutations while colon cancers have many mutations, a number that is increased dramatically in tumors that have lost genes involved in DNA repair.
Because of these substantial differences in proteins and mutations, tumors from different organs respond differently to treatment. For example, the majority of patients with testicular cancer can be cured with traditional chemotherapy combined with surgery. However, thyroid cancer and melanoma respond minimally to chemotherapy and require different approaches. Radioactive iodine can only be used to treat thyroid cancer because only thyroid cells take up iodine as part of their usual function.
Tumors that contain a large number of mutations often respond well to immunotherapies that help the patient’s immune system attack cancer cells. This is because the immune system sees tumors with more mutations as more foreign and thus mounts a greater response against them. For example, melanoma and bladder and lung cancers respond well to immunotherapy, particularly those that have lost DNA repair function. In contrast, prostate cancer, which often harbors a low number of mutations, has typically responded poorly to immunotherapies.
Treatments can drive cancer evolution
Treatment can also push cancers to evolve further, gaining advantageous mutations that help them survive and resist therapy.
For example, a subset of lung cancers is driven by mutation in a gene called EGFR. These are treated with a group of drugs that block the protein the mutant EGFR gene encodes for, slowing the cancer’s growth. Lung cancers treated with these drugs often develop a new EGFR mutation called T790M that confers resistance to most EGFR inhibitors. However, researchers have developed another drug that inhibits proteins with T790M and other EGFR mutations more broadly, improving survival for patients with these types of lung cancers.
Cancer cells can adapt to treatments and become resistant to them.
Similarly, metastatic prostate cancer is often treated with drugs that block androgen receptors, because it depends on them for growth and survival. Over time, the tumors evolve in response to these drugs and develop mutations that change the androgen receptor, massively increase the amount of androgen receptor they produce or, in some cases, completely change their appearance and protein content so they no longer rely on androgen receptors to survive. In these instances, patients require different therapies to overcome resistance.
Not an easy fight
The fight against cancer is a fight against evolution, the fundamental process that has driven life on Earth since time immemorial. This is not an easy fight, but medicine has made tremendous progress.
Deaths from cancer in the U.S. have declined since the early 1990s. Much of this is attributable to cancer screening programs and recently developed, more effective drugs. The U.S. Food and Drug Administration approved 332 new drug treatments for cancer between 2009 and 2020. More new drugs are on the way.