A wave of unusually extreme heat at the end of South America's winter was made 100 times more likely by climate change, according to a study published Tuesday.
"While many people have pointed to El Nino to explain the South America heat wave, this analysis has shown that climate change is the primary driver of the heat," said Lincoln Muniz Alves, a researcher at the Brazil National Institute for Space Research who participated in the study by the World Weather Attribution (WWA) group.
From Buenos Aires to Chile, and parts of Brazil, people found themselves in T-shirts at the height of the Southern Hemisphere winter in August and September, with temperatures shooting above 25 degrees Celsius (77 Fahrenheit), and reaching 37 degrees Celsius in some cases, breaking records.
The WWA study found that while the naturally occurring El Nino warming phenomenon had some impact, climate change was the main culprit, driving temperatures up between 1.4 and 4.3 degrees.
A team of 12 experts studied the link between the extreme weather and climate change over the 10 hottest winter days in a region encompassing Paraguay, central Brazil and regions of Bolivia and Argentina.
"The scientists found that these extreme heat episodes in South America outside the summer months would have been extremely unlikely without human-caused climate change," read a statement on the study.
"Heat episodes like these will become even more frequent and extreme if greenhouse gas emissions are not rapidly reduced to net zero."
In Sao Paulo, the biggest city in Latin America, four deaths were attributed to the heatwave.
"Heat kills, particularly in spring, before people are acclimatized to it," said Julie Arrighi, a director at the Red Cross Red Crescent Climate Centre.
"Temperatures above 40 degrees Celsius in early spring are incredibly extreme and while we are aware of just four heat-related fatalities, it's likely the true number is much higher."
The heatwave came during a winter and early spring marked by extreme weather phenomena, from torrential rains in Chile, to cyclones in southern Brazil and a drought which pushed Uruguay's potable water supply to the brink.
The heat also led to increased forest fires in the Amazon.
Unseasonably warm weather was not limited to South America, with last month dubbed the hottest September on record by the European Union climate monitor.
The El Nino phenomenon -- which warms waters in the southern Pacific and stokes hotter weather beyond -- is likely to contribute to 2023 becoming the hottest year on record in the next three months.
Cancer arises when cells accumulate enough damage to change their normal behavior. The likelihood of accruing damage increases with age because the safeguards in your genetic code that ensure cells function for the greater good of the body weaken over time.
Why, then, do children who haven’t had sufficient time to accumulate damage develop cancer?
I am a doctoral student who is exploring the evolutionary origins of cancer. Viewed through an evolutionary lens, cancer develops from the breakdown of the cellular collaboration that initially enabled cells to come together and function as one organism.
Cells in children are still learning how to collaborate. Pediatric cancer develops when rogue cells that defy cooperation emerge and grow at the body’s expense.
Adult versus pediatric cancer
The cells in your body adhere to a set of instructions defined by their genetic makeup – a unique code that carries all the information that cells need to perform their specific function. When cells divide, the genetic code is copied and passed from one cell to another. Copying errors can occur in this process and contribute to the development of cancer.
In adults, cancer evolves through a gradual accrual of errors and damages in the genetic code. Although there are safeguards against uncontrolled cell growth and repair mechanisms to fix genetic errors, aging, exposure to environmental toxins and unhealthy lifestyle can weaken these protections and lead to the breakdown of tissues. The most common types of adult cancers, such as breast cancer and lung cancer, often result from such accumulated damage.
In children, whose tissues are still developing, there is a dual dynamic between growth and cancer prevention. On one hand, rapidly dividing cells are organizing themselves into tissues in an environment with limited immune surveillance – an ideal setting for cancer development. On the other hand, children have robust safeguards and tightly regulated mechanisms that act as counterforces against cancer and make it a rare occurrence.
Although pediatric cancer is rare, it is a leading cause of death for children under 15 in the U.S. FatCamera/E+ via Getty Images
Children seldom accumulate errors in their genetic code, and pediatric cancer patients have a much lower incidence of genetic errors than adult cancer patients. However, nearly 10% of pediatric cancer cases in the U.S. are due to inherited genetic mutations. The most common heritable cancers arise from genetic errors that influence cell fate – that is, what a cell becomes – during the developmental stages before birth. Mistakes in embryonic cells accumulate in all subsequent cells after birth and can ultimately manifest as cancer.
Pediatric cancers can also spontaneously arise while children are growing. These are driven by genetic alterations distinct from those common in adults. Unlike in adults, where damage typically accumulates as small errors during cell division, pediatric cancers often result from large-scale rearrangements of the genetic code. Different regions of the genetic code swap places, disrupting the cell’s instructions beyond repair.
Such changes frequently occur in tissues with constant turnover, such as the brain, muscles and blood. Unsurprisingly, the most prevalent pediatric cancers often emerge from these tissues.
Genetic alterations are not a prerequisite for pediatric cancers. In certain pediatric brain cancers, the region of the genetic code responsible for cell specialization becomes permanently silenced. Although there is no error in the genetic code itself, the cell is unable to read it. Consequently, these cells become trapped in an uncontrolled state of division, ultimately leading to cancer.
Tailoring treatments for pediatric cancer
Cells in children typically exhibit greater growth, mobility and flexibility. This means that pediatric cancer is often more invasive and aggressive than adult cancer, and can severely affect development even after successful therapy due to long-term damage. Because the cancer trajectories in children and adults are markedly different, treatment approaches should also be different for each.
Standard cancer therapy includes radiotherapy or chemotherapy, which affect both cancerous and healthy, actively dividing cells. If the patient becomes unresponsive to these treatments, oncologists try a different drug.
In children, the side effects of certain treatments are amplified since their cells are actively growing. Unlike adult cancers, where different drugs can target different genetic errors, pediatric cancers have fewer of these targets. The rarity of pediatric cancer also makes it challenging to test new therapies in large-scale clinical trials.
Standard cancer treatments can lead to lifelong effects for pediatric patients.
A common reason for treatment failure is when cancer cells adapt to evade treatment and become drug resistant. Applying principles from evolutionary biology to cancer treatment can help tackle this.
For example, extinction therapy is an approach to treatment inspired by natural mass extinction events. The goal of this therapy is to eradicate all cancer cells before they can evolve. It does this by applying a “first strike” drug that kills most cancer cells. The remaining few cancer cells are then targeted through focused, smaller-scale interventions.
If complete extinction is not possible, the goal turns to preventing treatment resistance and keeping the tumor from progressing. This can be achieved with adaptive therapy, which takes advantage of the competition for survival among cancer cells. Treatment is dynamically turned “on” and “off” to keep the tumor stable while allowing cells that are sensitive to the therapy to out-compete and suppress resistant cells. This approach preserves the tissue and improves survival.
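To make the idea concrete, here is a toy simulation of adaptive therapy. It is only a sketch: the growth rates, drug kill rate and on/off thresholds are invented for illustration and do not come from any real treatment protocol. Drug-sensitive cells outgrow resistant ones while the drug is off, and treatment is switched on only when total tumor burden crosses an upper threshold.

```python
# Toy model of adaptive therapy (illustrative only; all parameters are invented).
# Sensitive cells (S) grow faster but are killed by the drug; resistant cells (R)
# grow more slowly and are unaffected. Both compete for a shared carrying capacity.

def adaptive_therapy(steps=2000, dt=0.01):
    S, R = 0.50, 0.01          # initial tumor fractions (hypothetical)
    K = 1.0                    # shared carrying capacity
    r_s, r_r = 1.0, 0.7        # growth rates: resistance carries a fitness cost
    kill = 1.5                 # extra death rate of S while the drug is "on"
    upper, lower = 0.5, 0.25   # toggle thresholds on total tumor burden
    drug_on = False
    history = []
    for _ in range(steps):
        total = S + R
        # Turn treatment on above the upper threshold, off below the lower one.
        if total > upper:
            drug_on = True
        elif total < lower:
            drug_on = False
        growth = 1 - total / K
        dS = r_s * S * growth - (kill * S if drug_on else 0.0)
        dR = r_r * R * growth
        S, R = max(S + dS * dt, 0.0), max(R + dR * dt, 0.0)
        history.append((S, R, drug_on))
    return history

if __name__ == "__main__":
    final_S, final_R, _ = adaptive_therapy()[-1]
    print(f"final sensitive fraction: {final_S:.3f}, resistant fraction: {final_R:.3f}")
```

The design choice the toy model captures is that the drug is deliberately withheld at low tumor burden so that sensitive cells keep suppressing resistant ones, rather than being wiped out and leaving the field to resistance.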
Although pediatric cancer patients have a better prognosis than adults do after treatment, cancer remains the second-leading cause of death in children under 15 in the U.S. Recognizing the developmental differences between pediatric and adult cancers and using evolutionary theory to “anticipate and steer” the cancer’s trajectory can enhance outcomes for children. This could ultimately improve young patients’ chances for a brighter, cancer-free future.
Centenarians, once considered rare, have become commonplace. Indeed, they are the fastest-growing demographic group of the world’s population, with numbers roughly doubling every ten years since the 1970s.
How long humans can live, and what determines a long and healthy life, have been of interest for as long as we know. Plato and Aristotle discussed and wrote about the ageing process over 2,300 years ago.
The pursuit of understanding the secrets behind exceptional longevity isn’t easy, however. It involves unravelling the complex interplay of genetic predisposition and lifestyle factors and how they interact throughout a person’s life. Now our recent study, published in GeroScience, has unveiled some common biomarkers, including levels of cholesterol and glucose, in people who live past 90.
Nonagenarians and centenarians have long been of intense interest to scientists as they may help us understand how to live longer, and perhaps also how to age in better health. So far, studies of centenarians have often been small scale and focused on a selected group, for example, excluding centenarians who live in care homes.
Huge dataset
Ours is the largest study comparing biomarker profiles measured throughout life among exceptionally long-lived people and their shorter-lived peers to date.
We compared the biomarker profiles of people who went on to live past the age of 100, and their shorter-lived peers, and investigated the link between the profiles and the chance of becoming a centenarian.
Our research included data from 44,000 Swedes who underwent health assessments at ages 64-99 - they were a sample of the so-called Amoris cohort. These participants were then followed through Swedish register data for up to 35 years. Of these people, 1,224, or 2.7%, lived to be 100 years old. The vast majority (85%) of the centenarians were female.
Twelve blood-based biomarkers related to inflammation, metabolism, liver and kidney function, as well as potential malnutrition and anaemia, were included. All of these have been associated with ageing or mortality in previous studies.
The biomarker related to inflammation was uric acid – a waste product in the body caused by the digestion of certain foods. We also looked at markers linked to metabolic status and function including total cholesterol and glucose, and ones related to liver function, such as alanine aminotransferase (Alat), aspartate aminotransferase (Asat), albumin, gamma-glutamyl transferase (GGT), alkaline phosphatase (Alp) and lactate dehydrogenase (LD).
We also looked at creatinine, which is linked to kidney function, and iron and total iron-binding capacity (TIBC), which is linked to anaemia. Finally, we also investigated albumin, a biomarker associated with nutrition.
Findings
We found that, on the whole, those who made it to their hundredth birthday tended to have lower levels of glucose, creatinine and uric acid from their sixties onwards. Although the median values didn’t differ significantly between centenarians and non-centenarians for most biomarkers, centenarians seldom displayed extremely high or low values.
For example, very few of the centenarians had a glucose level above 6.5 earlier in life, or a creatinine level above 125.
Villagrande Strisaili in the Ogliastra Province of Sardinia, Italy, has the world’s highest population of centenarian men. Sabino Parente/Shutterstock
For many of the biomarkers, both centenarians and non-centenarians had values outside of the range considered normal in clinical guidelines. This is probably because these guidelines are set based on a younger and healthier population.
When exploring which biomarkers were linked to the likelihood of reaching 100, we found that all but two (Alat and albumin) of the 12 showed a connection, even after accounting for age, sex and disease burden.
People in the lowest of five groups for total cholesterol and iron had a lower chance of reaching 100 than those with higher levels. Meanwhile, higher levels of glucose, creatinine, uric acid and markers of liver function were associated with a decreased chance of becoming a centenarian.
In absolute terms, the differences were rather small for some of the biomarkers, while for others the differences were somewhat more substantial.
For uric acid, for instance, the absolute difference was 2.5 percentage points. This means that people in the group with the lowest uric acid had a 4% chance of turning 100 while in the group with the highest uric acid levels only 1.5% made it to age 100.
Even if the differences we discovered were overall rather small, they suggest a potential link between metabolic health, nutrition and exceptional longevity.
The study does not allow any conclusions about which lifestyle factors or genes are responsible for the biomarker values, but it is reasonable to think that factors such as nutrition and alcohol intake play a role. Keeping track of your kidney and liver values, as well as glucose and uric acid, as you get older is probably not a bad idea.
That said, chance probably plays a role at some point in reaching an exceptional age. But the fact that differences in biomarkers could be observed a long time before death suggests that genes and lifestyle may also play a role.
The Russian segment of the International Space Station (ISS) sprang its third coolant leak in under a year on Monday, raising new questions about the reliability of the country's space program even as officials said crew members were not in danger.
Flakes of frozen coolant spraying into space were seen in an official live feed of the orbital lab provided by NASA around 1:30 pm Eastern Time (1730 GMT), and confirmed in radio chatter between US mission control and astronauts.
"The Nauka module of the Russian segment of the ISS has suffered a coolant leak from the external (backup) radiator circuit, which was delivered to the station in 2012," Russian space agency Roscosmos said on Telegram, adding temperatures remained normal in the affected unit.
"Nothing is threatening the crew and the station," added the statement.
Nauka, which means "science" in Russian and is also known as the Multipurpose Laboratory Module-Upgrade (MLM), launched in 2021.
US mission control in Houston could be heard asking astronauts on the American side to investigate.
"Hi, we're seeing flakes outside, we need a crew to go to the cupola, we think windows five or six, and confirm any visual flakes," an official said to the astronauts.
"There's a leak coming from the radiator on MLM," NASA astronaut Jasmin Moghbeli replied a little later.
NASA later confirmed the events in a statement Monday, saying that "the crew aboard (the) station was never in any danger," and that the leak was coming from Nauka's backup radiator.
"The primary radiator on Nauka is working normally, providing full cooling to the module with no impacts to the crew or to space station operations," NASA said, adding that the crew "was asked to close the shutters on US segment windows as a precaution against contamination."
- 'Something systematic' -
This is the third coolant leak to hit the Russian side of the ISS in less than a year.
On December 15, 2022, dramatic NASA TV images showed white particles resembling snowflakes streaming out of the rear of a docked Soyuz MS-22 spacecraft for several hours.
Speculation about the cause centered on an unlucky strike by a tiny space rock, or micrometeorite.
That spaceship returned to Earth uncrewed, and another uncrewed Soyuz was sent to replace it a few months later. As a result, the crew of two Russians and an American had to stay for a year-long mission, returning home only last month.
A similar leak in mid-February also affected the Russian Progress MS-21 cargo ship, which had been docked to the ISS since October 2022.
The succession of leaks lowers the probability they were caused by meteorites.
Space analyst Jonathan McDowell told AFP: "You've got three coolant systems leaking -- there's a common thread there. One is whatever, two is a coincidence, three is something systematic," speculating that a subcontractor company may be at fault.
"It really just emphasises the degrading reliability of Russian space systems. When you add it to the context of their failed Moon probe in August, they're not looking great."
The Russian space sector, which has historically been the pride of the country, has been facing difficulties for years, between lack of funding, failures and corruption scandals.
The ISS is one of the few areas of cooperation still ongoing between Moscow and Washington since the start of the Russian offensive in Ukraine and the international sanctions that followed.
When and how humans first settled in the Americas is a subject of considerable controversy. In the 20th century, archaeologists believed that humans reached the North American interior no earlier than around 14,000 years ago.
But our new research found something different. Our latest study supports the view that people were in America about 23,000 years ago.
The 20th century experts thought the appearance of humans had coincided with the formation of an ice-free corridor between two immense ice sheets straddling what’s now Canada and the northern US. According to this idea, the corridor, caused by melting at the end of the last Ice Age, allowed humans to trek from Alaska into the heart of North America.
Gradually, this orthodoxy crumbled. In recent decades, dates for the earliest evidence of people have crept back from 14,000 years ago to 16,000 years ago. This is still consistent with humans only reaching the Americas as the last Ice Age was ending.
In September 2021, we published a paper in Science that dated fossil footprints uncovered in New Mexico to around 23,000 years ago – the height of the last Ice Age. They were made by a group of people passing by an ancient lake near what’s now White Sands. The discovery added 7,000 years to the record of humans on the continent, rewriting American prehistory.
If humans were in America at the height of the last Ice Age, either the ice posed few barriers to their passage, or humans had been there for much longer. Perhaps they had reached the continent during an earlier period of melting.
Our conclusions were criticized; however, we have now published evidence confirming the early dates.
Dating the pollen
For many people, the word pollen conjures up a summer of allergies, sneezing and misery. But fossilized pollen can be a powerful scientific tool.
In our 2021 study, we carried out radiocarbon dating on common ditch grass seeds found in sediment layers above and below where the footprints were found. Radiocarbon dating is based on how a particular form – called an isotope – of carbon (carbon-14) undergoes radioactive decay in organisms that have died within the last 50,000 years.
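As a rough illustration of the arithmetic behind radiocarbon dating, the sketch below converts a measured carbon-14 fraction into an age using the standard half-life of about 5,730 years. It is only a back-of-the-envelope version: real radiocarbon work, including ours, also applies calibration curves, which this example omits, and the sample fraction shown is an invented number.

```python
import math

# Radiocarbon age from the fraction of carbon-14 remaining in a sample.
# Uses the conventional carbon-14 half-life of about 5,730 years.
HALF_LIFE_C14 = 5730.0  # years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Years since death, given the C-14 fraction relative to a living sample."""
    decay_constant = math.log(2) / HALF_LIFE_C14
    return -math.log(fraction_remaining) / decay_constant

# A sample retaining about 6.2% of its original carbon-14 is roughly 23,000 years old.
print(round(radiocarbon_age(0.062)))  # ~23,000
```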
Some researchers claimed that the radiocarbon dates in our 2021 research were too old because they were subject to something called the “hard water” effect. Water contains carbonate salts and therefore carbon. Hard water is groundwater that has been isolated from the atmosphere for some period of time, meaning that some of its carbon-14 has already undergone radioactive decay.
Common ditch grass is an aquatic plant and the critics said seeds from this plant could have consumed old water, scrambling the dates in a way that made them seem older than they were.
It’s quite right that they raised this issue. This is the way that science should proceed, with claim and counter-claim.
How did we test our claim?
Radiocarbon dating is robust and well understood. You can date any type of organic matter in this way as long as you have enough of it. So two members of our team, Kathleen Springer and Jeff Pigati of the United States Geological Survey, set out to date the pollen grains. However, pollen grains are really small, typically about 0.005 millimetres in diameter, so you need lots of them.
This posed a formidable challenge: you need thousands of them to get enough carbon to date something. In fact, you need 70,000 grains or more.
Medical science provided a remarkable solution to our conundrum. We used a technique called flow cytometry, which is more commonly used for counting and sampling individual human cells, to count and isolate fossil pollen for radiocarbon dating.
Flow cytometry uses the fluorescent properties of cells, stimulated by a laser. These cells move through a stream of liquid. Fluorescence causes a gate to open, allowing individual cells in the flow of liquid to be diverted, sampled, and concentrated.
We have pollen grains in all sediment layers between the footprints at White Sands, which allows us to date them. The key advantage of having so much pollen is that you can pick plants like pine trees that are not affected by old water. Our samples were processed to concentrate the pollen within them using flow cytometry.
After a year or more of labour intensive and expensive laboratory work, we were rewarded with dates based on pine pollen that validated the original chronology of the footprints. They also showed that old water effects were absent at this site.
The pollen also allowed us to reconstruct vegetation that was growing when people made the footprints. We got exactly the kinds of plants we would expect to have been there during the Ice Age in New Mexico.
We also used a different dating technique called optically stimulated luminescence (OSL) as an independent check. OSL relies on the accumulation of energy within buried grains of quartz over time. This energy comes from the background radiation that’s all around us.
The more energy we find, the older we can assume the quartz grains are. This energy is released when the quartz is exposed to light, so what you are dating is the last time the quartz grains saw sunlight.
To sample the buried quartz, you drive metal tubes into the sediment and remove them carefully to avoid exposing them to light. Taking quartz grains from the centre of the tube, you expose them to light in the lab and measure the light emitted by grains. This reveals their age. The dates from OSL supported those we got using other techniques.
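For readers who want the arithmetic, an OSL age is essentially the total radiation dose stored in the quartz divided by the rate at which the surrounding sediment delivers that dose. The sketch below uses invented numbers purely to show the shape of the calculation, not values from our site.

```python
# Schematic OSL age calculation (illustrative values, not data from the study).
# Age = equivalent dose (radiation energy stored in the quartz, in grays)
#       divided by the environmental dose rate (grays per thousand years).

def osl_age_ka(equivalent_dose_gy: float, dose_rate_gy_per_ka: float) -> float:
    """Burial age in thousands of years (ka)."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

# Hypothetical numbers: a stored dose of 46 Gy and a dose rate of 2 Gy per
# thousand years imply the grains were last exposed to sunlight ~23,000 years ago.
print(osl_age_ka(46.0, 2.0), "ka")  # 23.0 ka
```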
The humble pollen grain and some marvelous medical technology helped us confirm the dates the footprints were made, and when people reached the Americas.
In 2020, Oxford-based philosopher Toby Ord published a book called The Precipice about the risk of human extinction. He put the chances of “existential catastrophe” for our species during the next century at one in six.
It’s quite a specific number, and an alarming one. The claim drew headlines at the time, and has been influential since – most recently brought up by Australian politician Andrew Leigh in a speech in Melbourne.
It’s hard to disagree with the idea we face troubling prospects over the coming decades, from climate change, nuclear weapons and bio-engineered pathogens (all big issues in my view), to rogue AI and large asteroids (which I would see as less concerning).
But what about that number? Where does it come from? And what does it really mean?
Coin flips and weather forecasts
To answer those questions, we have to answer another first: what is probability?
The most traditional view of probability is called frequentism, and derives its name from its heritage in games of dice and cards. On this view, we know there is a one in six chance a fair die will come up with a three (for example) by observing the frequency of threes in a large number of rolls.
Or consider the more complicated case of weather forecasts. What does it mean when a weatherperson tells us there is a one in six (or 17%) chance of rain tomorrow?
It’s hard to believe the weatherperson means us to imagine a large collection of “tomorrows”, of which some proportion will experience precipitation. Instead, we need to look at a large number of such predictions and see what happened after them.
If the forecaster is good at their job, we should see that when they said “one in six chance of rain tomorrow”, it did in fact rain on the following day one time in every six.
So, traditional probability depends on observations and procedure. To calculate it, we need to have a collection of repeated events on which to base our estimate.
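One way to picture this is to check a forecaster's record directly. The snippet below uses a made-up set of forecast records to show what a frequentist calibration check looks like: gather all the days on which "one in six" was forecast and count how often it actually rained.

```python
# Frequentist check of a forecaster's calibration (invented records, for illustration).
# A well-calibrated forecaster who says "one in six chance of rain" should be
# followed by rain on roughly 1 of every 6 such days.

forecast_records = [  # (stated probability, did it rain?) -- made-up data
    (1/6, False), (1/6, False), (1/6, True), (1/6, False),
    (1/6, False), (1/6, False), (1/6, False), (1/6, True),
    (1/6, False), (1/6, False), (1/6, False), (1/6, False),
]

one_in_six_days = [rained for prob, rained in forecast_records if abs(prob - 1/6) < 1e-9]
observed_frequency = sum(one_in_six_days) / len(one_in_six_days)

print(f"stated probability: {1/6:.2f}")
print(f"observed frequency of rain on those days: {observed_frequency:.2f}")
```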
Can we learn from the Moon?
So what does this mean for the probability of human extinction? Well, such an event would be a one-off: after it happened, there would be no room for repeats.
Instead, we might find some parallel events to learn from. Indeed, in Ord’s book, he discusses a number of potential extinction events, some of which can potentially be examined in light of historical records.
Counting craters on the Moon can give us clues about the risk of asteroid impacts on Earth. NASA
For example, we can estimate the chances of an extinction-sized asteroid hitting Earth by examining how many such space rocks have hit the Moon over its history. A French scientist named Jean-Marc Salotti did this in 2022, calculating the odds of an extinction-level hit in the next century at around one in 300 million.
Of course, such an estimate is fraught with uncertainty, but it is backed by something approaching an appropriate frequency calculation. Ord, by contrast, estimates the risk of extinction by asteroid at one in a million, though he does note a considerable degree of uncertainty.
A ranking system for outcomes
There is another way to think about probability, called Bayesianism after the English statistician Thomas Bayes. It focuses less on events themselves and more on what we know, expect and believe about them.
In very simple terms, we can say Bayesians see probabilities as a kind of ranking system. In this view, the specific number attached to a probability shouldn’t be taken directly, but rather compared to other probabilities to understand which outcomes are more and less likely.
Ord’s book, for example, contains a table of potential extinction events and his personal estimates of their probability. From a Bayesian perspective, we can view these values as relative ranks. Ord thinks extinction from an asteroid strike (one in a million) is much less likely than extinction from climate change (one in a thousand), and both are far less likely than extinction from what he calls “unaligned artificial intelligence” (one in ten).
The difficulty here is that initial estimates of Bayesian probabilities (often called “priors”) are rather subjective (for instance, I would rank the chance of AI-based extinction much lower). Traditional Bayesian reasoning moves from “priors” to “posteriors” by again incorporating observational evidence of relevant outcomes to “update” probability values.
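For readers who want to see the mechanics, here is a minimal sketch of a Bayesian update using a Beta prior and some invented observations. It has nothing to do with extinction risk itself; the point is that the update only works when relevant outcomes exist to feed into it.

```python
# Minimal Bayesian update: a Beta prior over an unknown probability, updated with
# observed outcomes (invented data). This is the standard Beta-Binomial update.

def update_beta(prior_alpha: float, prior_beta: float, successes: int, failures: int):
    """Return the posterior Beta parameters after observing the outcomes."""
    return prior_alpha + successes, prior_beta + failures

# A prior roughly centred on "one in six" (alpha=1, beta=5 gives mean 1/6).
alpha, beta = 1.0, 5.0
# Hypothetical relevant observations: 2 "events" seen in 40 trials.
alpha, beta = update_beta(alpha, beta, successes=2, failures=38)

posterior_mean = alpha / (alpha + beta)
print(f"posterior mean probability: {posterior_mean:.3f}")  # ~0.065
```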
And once again, outcomes relevant to the probability of human extinction are thin on the ground.
Subjective estimates
There are two ways to think about the accuracy and usefulness of probability calculations: calibration and discrimination.
Calibration is the correctness of the actual values of the probabilities. We can’t determine this without appropriate observational information. Discrimination, on the other hand, simply refers to the relative rankings.
We don’t have a basis to think Ord’s values are properly calibrated. Of course, this is not likely to be his intent. He himself indicates they are mostly designed to give “order of magnitude” indications.
Even so, without any related observational confirmation, most of these estimates simply remain in the subjective domain of prior probabilities.
Not well calibrated – but perhaps still useful
So what are we to make of “one in six”? Experience suggests most people have a less than perfect understanding of probability (as evidenced by, among other things, the ongoing volume of lottery ticket sales). In this environment, if you’re making an argument in public, an estimate of “probability” doesn’t necessarily need to be well calibrated – it just needs to have the right sort of psychological impact.
From this perspective, I’d say “one in six” fits the bill nicely. “One in 100” might feel small enough to ignore, while “one in three” might drive panic or be dismissed as apocalyptic raving.
As a person concerned about the future, I hope risks like climate change and nuclear proliferation get the attention they deserve. But as a data scientist, I hope the careless use of probability gets left by the wayside and is replaced by widespread education on its true meaning and appropriate usage.
Authorities raised the death toll to 42 on Friday after a glacial lake overwhelmed a dam in the Indian Himalayas earlier this week, in one of the worst disasters in the area in nearly half a century.
The dam breach on Wednesday, which was caused in part by extreme rainfall, had long been predicted by scientists and environmental advocates due both to the climate crisis and inadequate regulations.
"We knew that this was coming," Gyatso Lepcha, general secretary of local environmental group Affected Citizens of Teesta, said in a statement reported by The Associated Press.
The flooding occurred in India's Sikkim state after South Lhonak Lake overflowed and breached the state's largest dam, AP reported.
"Floodwaters have caused havoc in four districts of the state, sweeping away people, roads, bridges," Indian Army spokesperson Himanshu Tiwari toldAFP.
The floodwaters destroyed 15 bridges, according to Reuters, and damaged or flattened more than 270 homes, AP reported.
State official Tseten Bhutia said that around 2,400 people had been rescued and 7,600 were living in emergency settlements, according to Reuters. Overall, the Sikkim government said that the disaster impacted a total of 22,000 people.
"It was already predicted in 2021 that this lake would breach and impact the dam."
"We got calls from people that river levels could rise at 3 am and we ran for our lives," 44-year-old Javed Ahmed Ansari, a Teesta valley river-rafting business owner, toldReuters. "We ran towards the hill in the jungle... We saw houses getting swept away. I can now only see the first floor of our house which is filled with sand, everything is submerged."
Officials said Friday that at least 42 people had died and 142 were still missing. After the flood, satellite photos revealed the lake had diminished by two-thirds, according to reporting by CBS and AFP.
The immediate cause of the flooding may have been a combination of extreme rainfall and a 6.2 magnitude earthquake in neighboring Nepal on Tuesday, according to AP. However, it is exactly the kind of disaster that scientists have warned about as the climate crisis melts Himalayan glaciers, swelling the waters of glacial lakes. South Lhonak Lake had been growing faster than any other lake in Sikkim, scientists warned in a 2021 study.
"It was already predicted in 2021 that this lake would breach and impact the dam," Indian Institute of Technology, Indore, glaciologist Farooq Azam toldCBS News. "There has been a substantial increase in the number of glacial lakes as the glaciers are melting due to global warming."
In general, mountain regions are melting twice as fast as the global average due primarily to the burning of fossil fuels. A study published in June found that the Hindu Kush Himalayas could lose 80% of their ice by 2100 if countries don't rapidly phase out oil, gas, and coal. In addition to triggering glacial floods, this would threaten the drinking water source relied on by 2 billion people.
This loss is clashing with the Indian government's attempt to transition to renewable energy by increasing hydroelectric power by 50% by the end of the decade, according to AP. To meet this goal, the government has signed off on hundreds of dams in the Himalayas, but a 2016 study warned that more than 20% of 177 dams in five Himalayan nations were at risk from breaches caused by the overflowing of glacial lakes.
That list included the dam that breached Wednesday, the Teesta 3 hydropower project, which began operating in 2017 after nine years of work. Local watchdog groups had also expressed concerns about its lack of safety features.
"Despite being the biggest project in the state, there were no early warning systems installed even though the glacier overflowing was a known risk," Himanshu Thakkar of South Asian Network for Rivers, Dams, and People told AP.
Wednesday's disaster follows another dam breach in 2021 that killed 81 people in Uttarakhand state. India’s National Disaster Management Agency promised Friday to fit most of the country's 56 at-risk glacial lakes with early warning systems.
Extreme rainfall triggered by the climate crisis is also proving deadly in India and around the world, with more than 100 killed in northern India in July and nearly 50 in Himachal Pradesh in August.
"Intense rain has led to this catastrophic situation in Sikkim where the rain has triggered a glacial lake outburst flood and damaged a dam, and caused loss of life," International Centre for Integrated Mountain Development ice researcher Miriam Jackson told reporters. "We observe that such extreme events increase in frequency as the climate continues to warm and takes us into unknown territory."
Amazon is set to launch two satellites on Friday, in its first test mission as part of its plan to deliver the internet from space and compete with Elon Musk’s Starlink service.
The launch window for the Atlas V rocket from the United Launch Alliance (ULA) hub at Kennedy Space Center in Florida is scheduled to open for two hours at 2:00 pm local time (1800 GMT).
Once up and running, the company founded by Jeff Bezos says its Project Kuiper will provide "fast, affordable broadband to unserved and underserved communities around the world," with a constellation of more than 3,200 satellites in low Earth orbit (LEO).
"This is Amazon's first time putting satellites into space, and we're going to learn an incredible amount regardless of how the mission unfolds," said Rajeev Badyal, Project Kuiper's vice president of technology.
The first operational satellites of the Kuiper project are due to be launched in early 2024, according to Amazon, which hopes for initial tests with customers at the end of next year.
The test on Friday will attempt to make contact with the satellites from Earth, deploy their solar panels, and confirm that all instruments are operating correctly and at the desired temperatures.
The two prototypes will be removed from orbit and disintegrated in the Earth's atmosphere at the end of the test mission.
Musk's SpaceX launched the first batch of its more than 3,700 operational Starlink satellites in 2019 and is by far the biggest player. London-headquartered OneWeb is another early entrant in the emerging sector.
These services are designed to provide internet access to even the most remote and underserved areas around the world, including war zones or disaster-struck areas.
Musk's ownership of Starlink caused uproar in Ukraine last month when it was revealed that he refused to turn on the service for a planned attack by Kyiv forces on Russia's Black Sea navy fleet last year.
Given the technology's strategic importance, governments are also keen to join the rush into the sector.
China plans to launch 13,000 satellites as part of its GuoWang constellation, while Canada's Telesat will add 300 and German start-up Rivada is eyeing 600.
That will be in addition to the European Union's Iris project -- 170 satellites -- and the 300-500 satellites planned to be launched by the US military's Space Development Agency.
The U.N.'s children's welfare agency released a new report Friday making the case for prioritizing the protection of children from fossil fuel-driven climate disasters—with more than 43 million children across the globe internally displaced in a six-year period due to drought, flooding, wildfires, and other extreme events.
In the report Children Displaced in a Changing Climate, United Nations Children's Fund (UNICEF) details how 95% of child displacements in 44 countries from 2016-21 were due to flooding and storms, with 40.9 million children forced from their homes in countries including Guatemala, South Sudan, and Somalia.
"It is terrifying for any child when a ferocious wildfire, storm, or flood barrels into their community," said UNICEF executive director Catherine Russell. "For those who are forced to flee, the fear and impact can be especially devastating, with worry of whether they will return home, resume school, or be forced to move again. Moving may have saved their lives, but it's also very disruptive."
South Sudan and Somalia recorded the greatest number of child displacements from flooding, with 11.8% and 10.7% of their child populations, respectively, forced to evacuate their homes due to the disasters.
But like much of the Global South, the climate threat facing Somalians is multi-pronged, and more than half of the global child displacements due to drought in the six-year period were also recorded in Somalia.
The report includes the story of a 10-year-old girl named Hibo, whose family was forced to leave their home in Guriel in search of food and water. They traveled for 10 days to reach a camp for internally displaced people.
"We arrived at this camp seven days ago, hoping things will be better," an 18-year-old mother named Ayesha, also living in the camp, told UNICEF. "My family has lost all our cattle and camels. They all died because we had no water to give them. We have nothing."
Children in countries that are facing numerous overlapping crises, including violent conflicts and persistent poverty, are at heightened risk of displacement, as they face "limited investment in risk mitigation and preparedness."
"For example, Haiti is high risk and is also dealing with conflict, violence, poverty, [and] earthquakes," reads the report, which provides the first global analysis of the child displacement crisis. "In Mozambique, poor communities are disproportionately affected with little capacity to recover from consecutive disasters."
"It is in these countries, where risk mitigation, adaptation, and preparedness—including embracing preemptive evacuations and other climate mobility options to save lives and minimize any disruption to children's access to critical services—is most urgent," reported UNICEF.
But the report suggests that people in wealthy countries should not think of climate-driven displacement of families and children as a crisis that affects only the Global South.
Wildfires were the cause of 810,000 child displacements, with a third of those taking place in 2020 alone. The countries with the highest number of children driven from their homes by out-of-control blazes were the United States, Canada, and Israel.
Most displacements triggered by wildfires were preemptive evacuations coordinated by federal and state officials, but they carry risks for children's well-being just as the displacements of children in the Horn of Africa and other parts of the world do.
"As wildfires grow more intense, frequent, and widespread, many children who live through them are experiencing lasting psychological trauma such as anxiety, depression and post-traumatic stress disorder," reads the report. "Children may also develop sleep or attention problems or struggle in school. If not managed, their emotional trauma can affect their physical health, potentially leading to chronic health problems, mental illness, and substance use."
The Philippines, India, and China reported the highest numbers of child displacement in absolute numbers, while small nations including Dominica, Saint Maarten, and Northern Mariana Islands recorded the most child displacements relative to their child population.
Seventy-six percent of children in Dominica were displaced in the six-year period due to weather-related events such as Hurricane Maria, which damaged or destroyed 95% of the island's housing stock.
As the United Nations Climate Change Conference (COP28) draws near, UNICEF called on governments to:
Protect children and young people from the impacts of climate change-exacerbated disasters and displacement by ensuring that child-critical services—including education, health, nutrition, social protection, and child protection services—are shock-responsive, portable and inclusive, including for those already uprooted from their homes;
Prepare children and young people to live in a climate-changed world by improving their adaptive capacity and resilience, and enabling their participation in finding inclusive solutions; and
Prioritize children and young people—including those already uprooted from their homes—in disaster and climate action and finance, humanitarian, and development policy, and investments to prepare for a future already happening.
UNICEF projected that flooding has the potential to displace nearly 96 million children over the next 30 years, based on current climate data.
"As the impacts of climate change escalate, so too will climate-driven movement," Russell said. "We have the tools and knowledge to respond to this escalating challenge for children, but we are acting far too slowly. We need to strengthen efforts to prepare communities, protect children at risk of displacement, and support those already uprooted."
The origins of nanotechnology predate Bawendi, Brus and Ekimov’s work on quantum dots – the physicist Richard Feynman speculated on what could be possible through nanoscale engineering as early as 1959, and engineers like Erik Drexler were speculating about the possibilities of atomically precise manufacturing in the 1980s. However, this year’s trio of Nobel laureates were part of the earliest wave of modern nanotechnology where researchers began putting breakthroughs in material science to practical use.
Quantum dots brilliantly fluoresce: They absorb one color of light and reemit it nearly instantaneously as another color. A vial of quantum dots, when illuminated with broad spectrum light, shines with a single vivid color. What makes them special, though, is that their color is determined by how large or small they are. Make them small and you get an intense blue. Make them larger, though still nanoscale, and the color shifts to red.
This property has led to many arresting images of rows of vials containing quantum dots of different sizes going from a striking blue on one end, through greens and oranges, to a vibrant red at the other. So eye-catching is this demonstration of the power of nanotechnology that, in the early 2000s, quantum dots became iconic of the strangeness and novelty of nanotechnology.
But, of course, quantum dots are more than a visually attractive parlor trick. They demonstrate that unique, controllable and useful interactions between matter and light can be achieved through engineering the physical form of matter – modifying the size, shape and structure of objects, for instance – rather than playing with the chemical bonds between atoms and molecules. The distinction is an important one, and it’s at the heart of modern nanotechnology.
Skip chemical bonds, rely on quantum physics
The wavelengths of light that a material absorbs, reflects or emits are usually determined by the chemical bonds that bind its constituent atoms together. Play with the chemistry of a material and it’s possible to fine-tune these bonds so that they give you the colors you want. For instance, some of the earliest dyes started with a clear substance such as aniline, transformed through chemical reactions to the desired hue.
Quantum dots work differently. Rather than depending on chemical bonds to determine the wavelengths of light they absorb and emit, they rely on very small clusters of semiconducting materials. It’s the quantum physics of these clusters that then determines what wavelengths of light are emitted – and this in turn depends on how large or small the clusters are.
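A back-of-the-envelope way to see the size effect is the simplest quantum-confinement model, a particle in a box, where the confinement energy grows as the box shrinks. The sketch below assumes an illustrative band gap and effective mass rather than the more careful effective-mass (Brus) model used for real quantum dots, but it reproduces the trend: smaller dots emit bluer light.

```python
# Back-of-the-envelope particle-in-a-box estimate of how emission color shifts with
# dot size. The band gap and effective mass below are assumed, not measured; real
# quantum dots require a more careful effective-mass (Brus) treatment.

H = 6.626e-34              # Planck constant, J*s
C = 2.998e8                # speed of light, m/s
M_E = 9.109e-31            # free-electron mass, kg
M_EFF = 0.10 * M_E         # assumed reduced effective mass of the electron-hole pair
E_GAP_J = 2.0 * 1.602e-19  # assumed bulk band gap of 2.0 eV, in joules

def emission_wavelength_nm(dot_diameter_nm: float) -> float:
    """Emission wavelength from E = E_gap + h^2 / (8 * m_eff * L^2)."""
    L = dot_diameter_nm * 1e-9
    confinement = H ** 2 / (8 * M_EFF * L ** 2)
    return H * C / (E_GAP_J + confinement) * 1e9

for diameter in (2.0, 4.0, 8.0):
    print(f"{diameter:.0f} nm dot -> ~{emission_wavelength_nm(diameter):.0f} nm emission")
# Smaller dots come out blue, mid-sized dots green, larger dots orange-red.
```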
This ability to tune how a material behaves by simply changing its size is a game changer when it comes to the intensity and quality of light that quantum dots can produce, as well as their resistance to bleaching or fading, their novel uses and – if engineered smartly – their toxicity.
Of course, few materials are completely nontoxic, and quantum dots are no exception. Early quantum dots were often based on cadmium selenide for instance – the component materials of which are toxic. However, the potential toxicity of quantum dots needs to be balanced by the likelihood of release and exposure and how they compare with alternatives.
Since its earlier days, quantum dot technology has evolved in safety and usefulness and has found its way into an increasing number of products, from displays and lighting, to sensors, biomedical applications and more. In the process, some of their novelty has perhaps worn off. It can be hard to remember just how much of a quantum leap the technology is that’s being used to promote the latest generation of flashy TVs, for instance.
And yet, quantum dots are a pivotal part of a technology transition that’s revolutionizing how people work with atoms and molecules.
Nanotechnologies like quantum dots work with what might be called the “base code” of the material world. The concept is intuitive when it comes to computing, where programmers use the base code of 1s and 0s, albeit through higher-level languages. It also makes sense in biology, where scientists are becoming increasingly adept at reading and writing the base code of DNA and RNA – in this case, using the chemical bases adenine, guanine, cytosine and thymine as their coding language.
This ability to work with base codes also extends to the material world. Here, the code is made up of atoms and molecules and how they are arranged in ways that lead to novel properties.
Bawendi, Brus and Ekimov’s work on quantum dots is a perfect example of this form of material-world base coding. By precisely forming small clusters of particular atoms into spherical “dots,” they were able to tap into novel quantum properties that would otherwise be inaccessible. Through their work they demonstrated the transformative power that comes through coding with atoms.
An example of ‘base coding’ using atoms to create a material with novel properties is a single molecule ‘nanocar’ crafted by chemists that can be controlled as it ‘drives’ over a surface. Alexis van Venrooy/Rice University, CC BY-ND
They paved the way for increasingly sophisticated nanoscale base coding that is now leading to products and applications that would not be possible without it. And they were part of the inspiration for a nanotechnology revolution that is continuing to this day. Reengineering the material world in these novel ways far transcends what can be achieved through more conventional technologies.
This possibility was captured in a 1999 U.S. National Science and Technology Council report with the title Nanotechnology: Shaping the World Atom by Atom. While it doesn’t explicitly mention quantum dots – an omission that I’m sure the authors are now kicking themselves over – it did capture just how transformative the ability to engineer materials at the atomic scale could be.
This atomic-level shaping of the world is exactly what Bawendi, Brus and Ekimov aspired to through their groundbreaking work. They were some of the first materials “base coders” as they used atomically precise engineering to harness the quantum physics of small particles – and the Nobel committee’s recognition of the significance of this is well deserved.
Each October, the Nobel Prizes celebrate a handful of groundbreaking scientific achievements. And while many of the awarded discoveries revolutionize the field of science, some originate in unconventional places. For George de Hevesy, the 1943 Nobel Laureate in chemistry who discovered radioactive tracers, that place was a boarding house cafeteria in Manchester, U.K., in 1911.
De Hevesy had the sneaking suspicion that the staff of the boarding house cafeteria where he ate every day was reusing leftovers from the dinner plates – each day’s soup seemed to contain all of the prior day’s ingredients. So he came up with a plan to test his theory.
At the time, de Hevesy was working with radioactive material. He sprinkled a small amount of radioactive material in his leftover meat. A few days later, he took an electroscope with him to the kitchen and measured the radioactivity in the prepared food.
His landlady, who was to blame for the recycled food, exclaimed “this is magic” when de Hevesy showed her his results, but really, it was just the first successful radioactive tracer experiment.
We are a team of chemists and physicists who work at the Facility for Rare Isotope Beams, located at Michigan State University. De Hevesy’s early research in the field has revolutionized the way that modern scientists like us use radioactive material, and it has led to a variety of scientific and medical advances.
The nuisance of lead
A year before conducting his recycled ingredients experiment, Hungary-born de Hevesy had traveled to the U.K. to start work with nuclear scientist Ernest Rutherford, who’d won a Nobel Prize just two years prior.
Rutherford was at the time working with a radioactive substance called radium D, a valuable byproduct of radium because of its long half-life (22 years). However, Rutherford couldn’t use his radium D sample, as it had large amounts of lead mixed in.
When de Hevesy arrived, Rutherford asked him to separate the radium D from the nuisance lead. The nuisance lead was made up of a combination of stable isotopes of lead (Pb). Each isotope had the same number of protons (82 for lead), but a different number of neutrons.
De Hevesy worked on separating the radium D from the natural lead using chemical separation techniques for almost two years, with no success. The reason for his failure was that, unknown to anyone at the time, radium D was actually a different form of lead – namely the radioactive isotope, or radioisotope Pb-210.
Nevertheless, de Hevesy’s failure led to an even bigger discovery. The creative scientist figured out that if he could not separate radium D from natural lead, he could use it as a tracer of lead.
Radioactive isotopes, like Pb-210, are unstable isotopes, which means that over time they will transform into a different element. During this transformation, called radioactive decay, they typically release particles or light, which can be detected as radioactivity.
Through radioactivity, an unstable isotope can turn from one element to another.
This radioactivity acts as a signature indicating the presence of the radioactive isotope. This critical property of radioisotopes allows them to be used as tracers.
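As a simple numerical illustration, the sketch below applies the standard exponential-decay law to radium D (lead-210), using the 22-year half-life mentioned above; the starting activity is an invented number.

```python
# Exponential decay of a radioactive tracer. The 22-year half-life of radium D
# (lead-210) comes from the article; the starting activity is a made-up figure.

HALF_LIFE_YEARS = 22.0

def activity_remaining(initial_activity: float, years: float) -> float:
    """Activity left after a given time, following N(t) = N0 * (1/2)**(t / half-life)."""
    return initial_activity * 0.5 ** (years / HALF_LIFE_YEARS)

# A hypothetical sample emitting 1,000 decays per second loses about half its
# signal every 22 years -- slow enough to serve as a long-lived tracer.
for years in (0, 22, 44, 110):
    print(f"after {years:3d} years: {activity_remaining(1000.0, years):7.1f} decays/s")
```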
Radium D as a tracer
A tracer is a substance that stands out in a crowd of similar material because it has unique qualities that make it easy to track.
For example, if you have a group of kindergartners going on a field trip and one of them is wearing a smartwatch, you can tell if the group went to the playground by tracking the GPS signal on the smartwatch. In de Hevesy’s case, the kindergartners were the lead atoms, the smart watch was radium D, and the GPS signal was the emitted radioactivity.
He collaborated with Fritz Paneth, who had also attempted the impossible task of separating radium D from lead without success. The two scientists “spiked” samples of different chemical compounds with small amounts of a radioactive tracer. This way they could study chemical processes by tracking the movement of the radioactivity across different chemical reactions.
De Hevesy continued his work studying chemical processes using different isotopic markers for many years. He was even the first to introduce nonradioactive tracers. One nonradioactive tracer he studied was a heavier isotope of hydrogen, called deuterium. Deuterium is 10,000 times less abundant than common hydrogen, but is roughly twice as heavy, which makes it easier to separate the two.
De Hevesy and his co-author used deuterium to track water in their bodies. In their investigations, they took turns ingesting samples and measuring the deuterium in their urine to study the elimination of water from the human body.
De Hevesy was awarded the 1943 Nobel Prize in chemistry “for his work on the use of isotopes as tracers in the study of chemical processes.”
Radioactive tracers today
More than a century after de Hevesy’s experiments, many fields now routinely use radioactive tracers, from medicine to materials science and biology.
In modern research, scientists focus on producing new isotopes and on developing procedures to use radioactive tracers more efficiently. The Facility for Rare Isotope Beams, or FRIB, where the three of us work, has a program dedicated to the production and harvesting of unique radioisotopes. These radioisotopes are then used in medical and other applications.
Scientists Greg Severin and Katharina Domnanich at the Facility for Rare Isotope Beams. Facility for Rare Isotope Beams.
One recent study involved the isolation of the radioisotope Zn-62 from irradiated water at the facility. This was a challenging task considering there were 100 quadrillion times more water molecules than Zn-62 atoms. Zn-62 is an important radioactive tracer utilized to follow the metabolism of zinc in plants and in nuclear medicine.
Eighty years ago, de Hevesy managed to take a dead-end separation project and turn it into a discovery that created a new scientific field. Radioactive tracers have already changed human lives in so many ways. Nevertheless, scientists are continuing to develop new radioactive tracers and find innovative ways to use them.