Using radar, Nasa's Lunar Reconnaissance Orbiter (LRO) spacecraft has confirmed the existence of caves beneath the lunar surface. Here's why such geological features will be key to establishing a base on the Moon, and what they can tell us about the shared origins of the Earth and the Moon.
Lunar orbiting satellites first spotted pits on the Moon’s surface decades ago. Many of these were thought to be openings that connected to substantial underground tunnels that form through volcanic processes, but only now has this been confirmed through the analysis of radar data.
Some of the tunnels thought to exist on the Moon are expected to be lava tubes, which are also found on Earth. When molten lava flows out of the ground, the surface of the lava stream eventually cools and hardens into a crust. The lava inside remains molten and continues to flow. Once that lava has drained away, it leaves an empty tunnel called a lava tube. These formation processes are thought to be very similar on the Earth and the Moon.
The data used in the latest study were collected by LRO in 2010 but only recently analysed using state-of-the-art signal-processing techniques. Radar (electromagnetic waves with a wavelength of 12.6cm) fired at acute angles towards these lunar pits partially illuminated the shadowed subterranean areas, generating measurable radar echo signals.
The pit in Mare Tranquillitatis leads to an underground cave system. Nasa
The timing and amplitude of the reflected signals allowed researchers to compare the measurements with simulations and build up a picture of the underground terrain. The data indicate that the largest of these, the Mare Tranquillitatis pit, leads to a cave 80 metres long and 45 metres wide: an area equivalent to around half a football pitch.
It is likely that the lunar surface is home to hundreds of such caves. It is widely thought that around 4.5 billion years ago, a young Earth violently collided with a Mars-sized proto-planet, splitting our youthful planet into the Earth and Moon system we have today.
After this high energy impact, the Moon may have been molten. It is therefore hardly surprising that caves of seemingly volcanic origin, bearing striking similarities with volcanic caves here on Earth, are present on the Moon. However, we don’t need to worry about astronauts dealing with the dangers of a volcanic eruption; volcanic activity on the Moon petered out entirely around 50 million years ago.
The Moon is thought to have formed when a Mars-sized object slammed into Earth. NASA/JPL-CALTECH/T. PYLE.
A home from home?
On Earth, we live in an unusually fortuitous environment, which protects us from threats from outer space. For example, Jupiter, the largest planet in our solar system, is well placed to gravitationally drag asteroids away from Earth. This minimises the frequency of cataclysmic asteroid collisions with our planet – such as the one that spelled the end of the dinosaurs.
One less obvious threat to life on Earth is ionising radiation. The whole solar system is constantly bathed in a soup of charged particles called galactic cosmic rays, which are accelerated to huge speeds by distant supernova explosions, sending them on a collision course with Earth.
In addition, events called coronal mass ejections from our own sun fling highly energetic particles in our direction in much larger numbers, though less frequently.
Lava tubes like this one may also exist on the Moon. NPS / B Michel
The Earth's magnetic field protects us from this radiation to a large degree by funnelling the charged particles towards the north and south poles. This is the origin of the aurora borealis and aurora australis that light up the night sky at high latitudes. The Earth's thick atmosphere also protects us, but we still get some exposure: a return transatlantic flight, where we are higher up in the atmosphere, gives the traveller a dose of radiation equivalent to five X-ray scans.
Now spare a thought for our Moon, which possesses neither an atmosphere nor a notable magnetic field. Far from being a "sea of tranquillity" (the name of the site of the first human landing on the Moon in 1969), the lunar surface is constantly bombarded by high-energy radiation.
This poses a serious challenge for populating a Moon base with humans. Astronauts bouncing about on the lunar surface will soak up about 10 times more radiation than experienced on a transatlantic flight and about 200 times what we get on Earth’s surface.
Although our bodies can deal with the generally harmless low levels of background radiation we experience on Earth, exposure to high levels of ionising radiation can have serious health implications. When ionising radiation interacts with the body, it can ionise the atoms contained within cells, stripping them of electrons. This damage can sometimes prevent DNA from replicating properly, and in extreme cases, can cause cell death.
For these reasons, any Moon base must provide adequate radiation shielding to protect its inhabitants. However, radiation shielding is best provided by dense material, which is expensive to transport to the Moon from Earth.
Hence, naturally shielded areas, like the recently discovered caves, are being earmarked as possible locations for human habitation on the Moon. These caves would afford their residents a whopping 130 to 170 metres of solid rock shielding – enough to halt even the highest-energy radiation.
When considering human settlements on the Moon, Mars and further afield, much attention is given to travel times, food and radiation risk. We'll undoubtedly face a harsh environment in deep space, and some thinkers have been pointing to genome editing as a way to ensure that humans can tolerate the severe conditions as they venture further into the solar system.
In January, I was fortunate to attend a much-anticipated debate between astronomer royal Lord Martin Rees and Mars exploration advocate Dr Robert Zubrin. The event at the British Interplanetary Society took on the topic of whether the exploration of Mars should be human or robotic.
In a recent book called The End of Astronauts, Lord Rees and co-author Donald Goldsmith outline the benefits of exploring the solar system using robotic spacecraft and vehicles, without the expense and risk of sending humans along for the ride. Dr Zubrin supports human exploration. One point of agreement, however, was Rees's advocacy of using gene-editing technology to enable humans to overcome the immense challenges of becoming an interplanetary species.
Our genome is all the DNA present in our cells. Since 2011, we have been able to easily and accurately edit genomes. First came a molecular tool called Crispr-Cas9, which today can be used in a high school lab for very little cost and has even been used on the International Space Station. Then came techniques called base and prime editing, through which minuscule changes can be made in the genome of any living organism.
The potential applications of gene editing for allowing us to travel further are almost limitless. One of the most problematic hazards astronauts will encounter in deep space is a higher dosage of radiation, which can cause havoc with many processes in the body and increase the longer-term risk of cancer.
Perhaps, using genome editing, we could insert into humans genes from plants and bacteria that are used to clean up radiation in the event of radioactive waste spills and nuclear fallout. It sounds like science fiction, but eminent thinkers such as Lord Rees believe this is key to our advancement across the solar system.
Identifying and then inserting genes into humans that slow down ageing and counter cellular breakdown could also help. We could engineer crops that resist the effects of exposure to radioactivity, as crews will need to grow their own food. And we could personalise medicine to an astronaut's needs based on their particular genetic makeup.
Imagine a future where the human genome is so well understood it has become pliable under this new, personalised medicine.
Kate Rubins was the first person to sequence DNA in space. NASA
Genes for extremes
Tardigrades are microscopic animals sometimes referred to as “water bears”. Experiments have shown that these tiny creatures can tolerate extreme temperatures, pressures, high radiation and starvation. They can even tolerate the vacuum of space.
Geneticists are eager to understand their genomes and a paper published in Nature sought to uncover the key genes and proteins that give the miniature creatures this extraordinary stress tolerance. If we could insert some of the genes involved into crops, could we make them tolerant to the highest levels of radiation and environmental stress? It’s worth exploring.
Even more intriguing is whether inserting tardigrade genes into our own genome could make us more resilient to the harsh conditions in space. Scientists have already shown that human cells in the lab developed increased tolerance to X-ray radiation when tardigrade genes were inserted into them.
Transferring genes from tardigrades is just one speculative example of how we might be able to engineer humans and crops to be more suited to space travel.
Tardigrades are incredibly resilient organisms. Dotted Yeti
We’ll need much more research if scientists are ever to get to this stage. However, in the past, several governments have been keen to enforce tight restrictions on how genome editing is used, as well as on other technologies for inserting genes from one species into another.
Germany and Canada are among the most cautious, but elsewhere restrictions seem to be relaxing.
In November 2018, the Chinese scientist He Jiankui announced that he had created the first gene-edited babies. He had altered a gene in the unborn twins with the aim of conferring resistance to HIV infection.
The scientist was subsequently jailed. But he has since been released and allowed to carry out research again.
In the new space race, certain countries may go to lengths with genome editing that other nations, especially in the west where restrictions are already tight, may not. Whoever wins would reap enormous scientific and economic benefits.
If Rees and the other futurists are right, this field has the potential to advance our expansion into the cosmos. But society will need to agree to it.
It’s likely there will be opposition, because of the deep-seated fears of altering the human species forever. And with base and prime editing now having advanced the precision of targeted gene editing, it’s clear that the technology is moving faster than the conversation.
One country or another is likely to take the leap where others pull back from the brink. Only then will we find out just how viable these ideas really are. Until then, we can only speculate with curiosity, and perhaps excitement too.
Fashion is a dynamic business. Most apparel brands make two to four collections per year. While selling current seasonal collections, brands plan for the next ones at least a year in advance, identifying market trends and materials. The selling window is around three months, and unsold inventories represent financial loss.
Fast fashion companies introduce new lines even more frequently, reducing the amount of time needed to design, produce and market new items.
Tech and fashion
The fashion industry is familiar with experimenting with technological frontiers. Some of the most significant technological breakthroughs have been laser cutting, computer-aided design and, more recently, the use of 3D printing in the early 2010s.
Fashion companies also use blockchains for product authentication, traceability and digital IDs, including those integrated by LVMH/Louis Vuitton.
Additionally, companies have incorporated augmented reality into marketing and retail strategies to create immersive and interactive customer experiences.
Generative AI could become a game-changer for the fashion industry, adding between US$150 billion and US$250 billion to operating profits within three to five years. While the fashion sector has only started integrating AI, the opportunities and challenges it presents are evident across all business processes.
Generative AI could help fashion companies improve their processes, bring their products to the market faster, sell more efficiently and improve customer experience. Generative AI could also support product development by analyzing large social media and runway show datasets to identify emerging fashion trends.
Estée Lauder Companies and Microsoft have teamed up to open an in-house AI innovation lab for identifying and responding to trends, informing product development and improving customer experiences.
Designers could use AI to visualize different materials and patterns based on past consumer preferences. For example, the Tommy Hilfiger Corporation is collaborating with IBM and the Fashion Institute of Technology in New York on the Reimagine Retail project, which uses AI to analyze consumer data and design new fashion collections.
NOWNESS looks at Dutch designer Iris van Herpen’s imaginative uses of AI.
AI and sustainability
AI helps in creating more sustainable fashion practices by optimizing the use of resources, recycling materials and reducing waste through more precise manufacturing processes and efficient supply chain and inventory management. For example, H&M uses AI to improve its recycling processes, sort and categorize garments for recycling and promote a circular fashion economy.
AI can improve operations and supply chain processes by optimizing inventory management, predicting sales based on historical data, and reducing overstock and stock-outs. Brands like Zara and H&M already use AI to control supply chains, promoting sustainability by optimizing stock levels and reducing waste. Zara also introduced AI and robotics into their retail stores to speed up online order pick-ups.
AI-powered virtual try-on solutions allow customers to see how clothes will look on them without physically trying them on, enhancing the online shopping experience and reducing return rates. Virtual try-ons are already a reality at digital companies such as prescription eyewear retailer Warby Parker and Amazon.
Another example is Modiface, acquired by French multinational personal care company L’Oréal in 2018, which provides AR-based virtual try-ons for makeup and fashion accessories.
Virtual try-ons help buyers make decisions and reduce returns. (Shutterstock)
Effective campaigning
AI can also deliver customized customer experiences. Some brands, such as Reebok and Versace, invite their customers to use AI tools to design products inspired by the brand’s feel and look.
AI-powered tools can help marketing teams target and maximize the impact of their communication campaigns, potentially reducing marketing costs.
The fashion business includes everything from small companies to global chains, and from haute couture to ready-to-wear, mass market and fast fashion. Each brand must understand where AI could generate value for its business without diluting its brand identity.
The biggest challenge, however, is to avoid homogenization. Generative AI should not replace human creativity but create new spaces and processes.
Fashion companies should be prepared to manage the risks associated with new technologies, particularly regarding intellectual property, creative rights and brand reputation. One of the primary issues is the potential infringement of intellectual property related to training data.
GenAI models are trained on vast design datasets, often containing copyrighted works. This can lead to legal disputes over originality and ownership. A related risk is bias and fairness in generative-AI systems, which may present reputational challenges for brands that rely on the technology.
The ambiguity surrounding creative rights in the age of AI is another concern. It’s challenging to determine who holds the creative rights to a design, whether it’s the designer who conceptualized the idea, the developer who built the AI or the AI itself. This ambiguity can dilute the authenticity of a brand’s creative expression, potentially harming its reputation if consumers perceive the brand as less innovative or authentic.
Plastic microbeads, those tiny troublemakers found in the personal care products of the early 1990s to the late 2010s, wreak havoc on the environment. These minuscule bits, smaller than a sesame seed, escape the clutches of wastewater treatment plants, accumulating in oceans and rivers where they pose a threat to marine life.
Thankfully, soaps and scrubs containing plastic microbeads are impossible to find on today’s store shelves. In recent years, many countries have recognized these microbeads as a source of marine plastic pollution and banned them from personal hygiene products. Microbead bans make room for more environmentally friendly substitutes, allowing consumers to continue to experience that satisfying deep-cleaning feeling without harming the environment.
Research shows that a treasure trove of alternatives to synthetic plastics is hidden within biowaste. One such gem is brewer's spent grain (BSG), the leftovers from brewing beer. Inexpensive and abundant, BSG is used in animal feed, biogas production, compost and fertilizer.
More recently, BSG has been used as a protein- and fibre-rich ingredient in crackers, breads and cookies.
Cellulose — the main molecular constituent of plant cell walls — is a key component of brewer's spent grain. For over a century, scientists have prepared vast amounts of cellulose-based materials by transforming trees through a relatively straightforward chemical process. Trees are felled, debarked, chipped, pulped and bleached, and the cellulose that remains is shaped into its desired final form.
Cellulose fibres don't dissolve in most solvents, and thankfully so, otherwise cotton T-shirts would be washed away in the rain and acetone-soaked tissues would melt instead of removing nail polish.
However, cellulose can be dissolved in water containing sodium hydroxide at certain concentrations, which provides a more sustainable option. Additionally, with sodium hydroxide, the dissolved cellulose can be converted back into a solid through a simple neutralization reaction.
This alkali-based process can yield pure cellulose microbeads, which were first prepared about a decade ago. Cellulose pulp is dissolved in aqueous sodium hydroxide, then neutralized, one drop at a time, in an acid bath. When the acid bath is drained away, spherical cellulose-based microbeads remain.
Fine-tuning the process
Our research considered whether the abundance of cellulose-based biowaste generated from agri-food industries could generate microbeads. With BSG as our cellulose-rich starting material and exfoliating microbeads as our goal, we started experimenting in the lab.
Inexpensive and abundant, brewer’s spent grain is used in animal feed, biogas production, compost and fertilizer. (Shutterstock)
BSG presented a challenge for creating pure cellulose microbeads due to the complexity of its composition. Besides cellulose, BSG contains hemicellulose, lignin, proteins, lipids and small amounts of ash, all carefully intertwined to create different plant-cell structures.
To overcome this obstacle, we used dilute acid hydrolysis to loosen BSG's cellulose and other fibres (hemicellulose and lignin). Coarse filtration then washed away simple sugars and proteins, leaving behind a cellulose- and lignin-enriched pulp.
The next step involved fine-tuning the sodium hydroxide solution. Only at specific temperatures and concentrations are sodium hydroxide solutions strong enough to overcome the bonds that hold cellulosic fibres together; this is true of the more complex BSG pulp as well.
Our experiments revealed a narrow processing window where the BSG pulp completely dissolved, aided by small amounts of zinc oxide. Introducing these BSG solutions, drop by drop, into an acid bath then achieved our shaping and solidification goals simultaneously.
After a few hours, the acid bath was drained away and smooth, spherical BSG-based microbeads remained.
Finally, strength and stability testing showed that BSG-based beads were strong enough to stand up to their conventional plastic counterparts. When incorporated into soaps, BSG-based microbeads performed better than other plastic-microbead alternatives currently available, such as ground coconut shells and apricot pits.
Plastic microbeads, once popular in the personal care products of the early 1990s to the late 2010s, are environmentally damaging. (Shutterstock)
Creative solutions
The transformation of brewery waste into exfoliating microbeads represents yet another step towards a more sustainable future. By harnessing the properties of the cellulose and lignin present in BSG, this innovation demonstrates the potential of waste materials to contribute to sustainable solutions.
This success ultimately underscores the importance of research and innovation in transitioning towards more environmentally friendly practices. Finally, it encourages exploring other similar opportunities to reduce our ecological footprint.
If it’s possible to transform brewery waste into a valuable component of personal hygiene products, just imagine what other opportunities may be found in the trash.
I am a biochemist and molecular biologist studying the roles microbes play in health and disease. I also teach medical students and am interested in how the public understands science.
Here are some facts about vaccines that skeptics like Robert F. Kennedy Jr. get wrong:
The false claim that vaccines cause autism persists despite study after study of large populations throughout the world showing no causal link between them.
Claims about the dangers of vaccines often come from misrepresenting scientific research papers. Kennedy cites a 2005 report allegedly showing massive brain inflammation in monkeys in response to vaccination, when in fact the authors of that study state that there were no serious medical complications. A separate 2003 study that Kennedy claimed showed a 1,135% increase in autism in vaccinated versus unvaccinated children actually found no consistent significant association between vaccines and neurodevelopmental outcomes.
Kennedy also claims that a 2002 vaccine study included a control group of children 6 months of age and younger who were fed mercury-contaminated tuna sandwiches. This claim is false.
Kennedy is co-counsel with a law firm that is suing the pharmaceutical company Merck based in part on the unfounded assertion that the aluminum in one of its vaccines causes neurological disease. Aluminum is added to many vaccines as an adjuvant to strengthen the body’s immune response to the vaccine, thereby enhancing the body’s defense against the targeted microbe.
The law firm's claim is based on a 2020 report showing that brain tissue from some patients with Alzheimer's disease, autism and multiple sclerosis has elevated levels of aluminum. The authors of that study do not assert that vaccines are the source of the aluminum, and vaccines are unlikely to be the culprit.
Notably, the brain samples analyzed in that study were from 47- to 105-year-old patients. Most people are exposed to aluminum primarily through their diets, and aluminum is eliminated from the body within days. Therefore, aluminum exposure from childhood vaccines is not expected to persist in those patients.
Vaccines undergo the same approval process as other drugs
Clinical trials for vaccines and other drugs are blinded, randomized and placebo-controlled studies. For a vaccine trial, this means that participants are randomly divided into one group that receives the vaccine and a second group that receives a placebo saline solution. The participants, and in double-blind trials the researchers carrying out the study as well, do not know who has received the vaccine and who has received the placebo until the study has finished. This eliminates bias.
Results are published in the public domain. For example, vaccine trial data for COVID-19, human papillomavirus and rotavirus are available for anyone to access.
Vaccine manufacturers are liable for injury or death
Kennedy’s lawsuit against Merck contradicts his insistence that vaccine manufacturers are fully immune from litigation.
His claim is based on an incorrect interpretation of the National Vaccine Injury Compensation Program, or VICP. VICP is a no-fault federal program created to reduce frivolous lawsuits against vaccine manufacturers, which threaten to cause vaccine shortages and a resurgence of vaccine-preventable disease.
A person claiming injury from a vaccine can petition the U.S. Court of Federal Claims through the VICP for monetary compensation. If the VICP petition is denied, the claimant can then sue the vaccine manufacturer.
The majority of cases resolved under the VICP end in a negotiated settlement between parties without establishing that a vaccine was the cause of the claimed injury. Kennedy and his law firm have incorrectly used the payouts under the VICP to assert that vaccines are unsafe.
The VICP gets the vaccine manufacturer off the hook only if it has complied with all requirements of the Federal Food, Drug and Cosmetic Act and exercised due care. It does not protect the vaccine maker from claims of fraud or withholding information regarding the safety or efficacy of the vaccine during its development or after approval.
Good nutrition and sanitation are not substitutes for vaccination
Kennedy asserts that populations with adequate nutrition do not need vaccines to avoid infectious diseases. While it is clear that improvements in nutrition, sanitation, water treatment, food safety and public health measures have played important roles in reducing deaths and severe complications from infectious diseases, these factors do not eliminate the need for vaccines.
After World War II, the U.S. was a wealthy nation with substantial health-related infrastructure. Yet, Americans reported an average of 1 million cases per year of now-preventable infectious diseases.
Vaccines introduced or expanded in the decades that followed against diseases like diphtheria, pertussis, tetanus, measles, polio, mumps, rubella and Haemophilus influenzae type b have resulted in the near or complete eradication of those diseases.
It’s easy to forget why many infectious diseases are rarely encountered today. The success of vaccines does not always tell its own story. It must be retold again and again to counter misinformation.
Researchers have been attaching tags to the foreheads of seals for the past two decades to collect data in remote and inaccessible regions. A researcher tags the seal during mating season, when the marine mammal comes to shore to rest, and the tag remains attached to the seal for a year.
A researcher glues the tag to the seal’s head – tagging seals does not affect their behavior. The tag detaches after the seal molts and sheds its fur for a new coat each year.
The tag collects data while the seal dives and transmits its location and the scientific data back to researchers via satellite when the seal surfaces for air.
First proposed in 2003, seal tagging has grown into an international collaboration with rigorous sensor accuracy standards and broad data sharing. Advances in satellite technology now allow scientists to have near-instant access to the data collected by a seal.
New scientific discoveries aided by seals
The tags attached to seals typically carry pressure, temperature and salinity sensors, which measure properties used to assess the ocean's rising temperatures and changing currents. The tags also often include chlorophyll fluorometers, which can provide data about the water's phytoplankton concentration.
Phytoplankton are tiny organisms that form the base of the oceanic food web. Their presence often means that animals such as fish and seals are around.
The seal sensors can also tell researchers about the effects of climate change around Antarctica. Approximately 150 billion tons of ice melts from Antarctica every year, contributing to global sea-level rise. This melting is driven by warm water carried to the ice shelves by oceanic currents.
With the data collected by seals, oceanographers have described some of the physical pathways by which this warm water reaches the ice shelves, and how currents transport the resulting meltwater away from glaciers.
Seals regularly dive under sea ice and near glacier ice shelves. These regions are challenging, and can even be dangerous, to sample with traditional oceanographic methods.
Across the open Southern Ocean, away from the Antarctic coast, seal data has also shed light on another pathway causing ocean warming. In highly localized regions, excess heat moves from the ocean surface, which is in contact with the atmosphere, down into the interior ocean. In these areas, heat ends up in the deep ocean, where it can no longer be dissipated back to the atmosphere.
At ocean fronts – boundaries where different water masses meet – the ocean's circulation creates turbulence and mixes water in a way that brings nutrients up to the ocean's surface, where phytoplankton can use them. As a result, fronts can host phytoplankton blooms, which attract fish and seals.
Scientists use the tag data to see how seals are adapting to a changing climate and warming ocean. In the short term, seals may benefit from more ice melt around the Antarctic continent, as they tend to find more food in coastal areas with holes in the ice. Rising subsurface ocean temperatures, however, may change where their prey is and ultimately threaten seals’ ability to thrive.
Seals have helped scientists understand and observe some of the most remote regions on Earth. On a changing planet, seal tag data will continue to provide observations of their ocean environment, which has vital implications for the rest of Earth’s climate system.
Novo Nordisk — the pharmaceutical giant behind popular weight-loss drugs Ozempic and Wegovy — spent a record $3.2 million on lobbying in the first six months of 2024 as the Denmark-based company expanded its footprint in the United States.
In 2017, after two years of clinical trials, the Food and Drug Administration approved Novo Nordisk's injectable drug Ozempic strictly for adults with Type 2 diabetes. Four years later, the FDA approved Wegovy, a weight-loss drug that is not restricted to Type 2 patients but contains the same active ingredient as Ozempic, semaglutide.
Although Ozempic was originally approved for patients with diabetes, some non-diabetics buy it for the purpose of general weight loss under “off-label” prescriptions. The popularity of these prescriptions has contributed to a shortage of Ozempic in the United States, leaving it out of the hands of those who need it most.
An estimated 15.5 million Americans, or 6% of the U.S. population, have reported using injectable weight-loss drugs, according to a Gallup poll released in May. These drugs rose in popularity in 2023 as Novo Nordisk launched an aggressive advertising campaign, spending a total of $471 million to market Ozempic and Wegovy in one year.
In 2023, Novo Nordisk and its U.S. subsidiary, Novozymes North America, spent over $5 million on lobbying, hiring a whopping 77 lobbyists across 13 firms. This marked a 51% increase from the number of lobbyists hired in 2022. Of those, 54 previously held government jobs, bringing insider knowledge and industry connections to each role.
In addition to its lobbying efforts, Novo Nordisk has also been actively making campaign contributions in the U.S., spending over $497,000 between its PAC, employees, and executives in the 2023-2024 cycle, as of July 16.
Novo Nordisk currently charges around $1,000 for a month’s supply of Ozempic injections. The high cost of this medicine has been criticized for squeezing low-income patients with diabetes out of the market for life-changing drugs.
Medicare covers Ozempic only when it is used to treat patients with diabetes. Similarly, Wegovy is covered only for patients at cardiovascular risk. When either drug is used for general weight loss, Medicare does not cover the cost.
Novo Nordisk hired a law firm, Arnold & Porter, to lobby for Ozempic to be covered by Medicare as more and more Americans became customers in 2023.
Sen. Bernie Sanders (I-Vt.) argues that the high price of these weight-loss drugs has the power to bankrupt the Medicaid system. In June, Sen. Sanders threatened to subpoena Novo Nordisk CEO Lars Fruergaard Jorgensen, criticizing the company for charging a far higher price for Ozempic in the U.S. than in other countries.
“The American people are sick and tired of paying, by far, the highest prices in the world for prescription drugs. Novo Nordisk currently charges Americans with Type 2 diabetes $969 a month for Ozempic, while this same exact drug can be purchased for just $155 in Canada and just $59 in Germany,” Sanders said.
Jorgensen voluntarily agreed to testify in a Senate Committee on Health, Education, Labor and Pensions hearing in September. The name of the hearing: “Why Is Novo Nordisk Charging Americans with Diabetes and Obesity Outrageously High Prices for Ozempic and Wegovy?”
OpenAI on Thursday said it was putting its artificial intelligence engine to work in a challenge to Google's market-dominating search engine.
The startup behind ChatGPT announced that it is testing a "SearchGPT" prototype that is "designed to combine the strength of our AI models with information from the web" to answer online queries quickly and to provide relevant sources.
SearchGPT is being made available to a small group of users and publishers to get feedback, OpenAI said in a blog post.
Search features refined in the prototype will be woven into ChatGPT in the future, according to the San Francisco-based company.
Users will be able to interact with SearchGPT through conversational queries, and can ask follow-up questions as they might if speaking to a person, OpenAI said.
Google recently added AI-generated query result summaries -- referred to as "Overviews" -- to its search engine, causing worries among some that the move would result in fewer opportunities to serve up money-making ads.
The new feature places AI-written text at the top of Google search results, ahead of the traditional links to sites, summarizing the information the engine believes answers the user's query.
OpenAI's description of SearchGPT sounded similar to Google's Overviews.
Since the release of ChatGPT at the end of 2022, companies in the sector have been engaged in a frantic race to deploy generative AI programs for producing text, images and other content through prompts in everyday language.
"We are innovating at every layer of the AI stack," Google chief Sundar Pichai said this week during an earnings call for parent company Alphabet, which he also heads.
OpenAI said it was working with some publishers to refine SearchGPT, which is being kept separate from the training of its generative AI foundation models.
"AI search is going to become one of the key ways that people navigate the internet, and it's crucial, in these early days, that the technology is built in a way that values, respects, and protects journalism and publishers," The Atlantic chief executive Nicholas Thompson said in the OpenAI blog post.
"We look forward to partnering with OpenAI in the process."
OpenAI has invited users to sign up on a waitlist to try SearchGPT.
SpaceX's stalwart Falcon 9 rocket has been cleared for launch after experiencing a rare failure earlier this month, officials said Thursday.
The rocket, a prolific launch vehicle that propels both satellites and astronauts into orbit, experienced an anomaly in its second stage during a July 11 launch, which meant it failed to deploy 20 Starlink satellites at a high enough altitude; all of them burnt up on re-entry through Earth's atmosphere.
"During the first burn of Falcon 9's second stage engine, a liquid oxygen leak developed within the insulation around the upper stage engine," Elon Musk's company said in a statement.
"The cause of the leak was identified as a crack in a sense line for a pressure sensor attached to the vehicle's oxygen system."
After investigating the mishap, the Federal Aviation Administration said it had determined "no public safety issues were involved in the anomaly" and that the Falcon 9 vehicle "may return to flight operations while the overall investigation remains open."
The last time a Falcon 9 experienced a serious incident was in September 2016, when one blew up on the launchpad.
And in June 2015, the second stage of a Falcon 9 disintegrated two minutes after lift-off, resulting in the loss of important equipment bound for the International Space Station.
The new mishap notably came as the first crew of Boeing's problem-plagued Starliner spaceship are stuck waiting for ground teams to give a green light for them to return from the ISS.
With Falcon 9 cleared, the next scheduled resupply of the orbiting outpost in early August can now take place as planned, using a Northrop Grumman Cygnus cargo ship.
When milliseconds can mean the difference between silver and gold, endurance athletes in sports like marathon running, cycling, rowing and swimming optimize every aspect of their physiology for a competitive edge.
But there is another aspect of endurance training that may have largely been overlooked by athletes and trainers – the role of the gut microbiome in optimizing your mitochondrial health and fitness.
The gut microbiome, a hidden factory of highly collaborative microorganisms in your intestines, ensures that your metabolism, immune system and brain run smoothly. Some researchers liken it to another organ that senses nutritional inputs, manufactures signaling molecules and prepares your body to respond appropriately.
Research has shown that endurance athletes have different gut microbiomes compared with the general population. Their microbiome's composition and function, such as increased production of a short-chain fatty acid called butyrate, are associated with increased VO2 max, a fitness benchmark that measures your ability to consume oxygen during intense exercise. One organism in particular, Veillonella, is found in some elite runners and may help raise the lactate threshold, a fitness metric closely linked to mitochondrial function and how long an athlete is able to sustain intense effort.
Mitochondria are more than just the powerhouses of the cell.
Some of the metabolites these gut microbes produce – butyrate, conjugated linoleic acid and urolithin A among them – have been shown to specifically improve muscle strength and endurance. Combining exercise with diets high in fiber, polyphenols – chemical compounds found in plants – and healthy fats may thus augment mitochondrial fitness and improve exercise performance.
The drinks, shakes, bars and gels used for endurance sports are processed foods formulated to provide concentrated and accessible energy during intense exercise. While unhealthy in other contexts, they can be key for enhancing performance during long endurance events when your body depletes its own version of accessible carbohydrates called glycogen.
But it’s important to complement these energy supplements with a healthy diet in the recovery hours following exercise. The combination of an unhealthy baseline diet with high-intensity exercise could compromise your gut barrier and increase inflammation. This has been linked to various training-related issues including gastrointestinal upset, musculoskeletal injuries and respiratory illnesses.
Performance-enhancing microbes
Reintroducing a diet rich in foods that positively affect your microbiome — beans, nuts, seeds, whole grains, fruits and vegetables — during the recovery phase of training can help most people prevent the adverse effects of high-intensity exercise and optimize performance.
However, due to antibiotic misuse and processed diets, some people lack key microbes and metabolic machinery needed to convert fibers and polyphenols into useful molecules the body can use. This shortage may explain why some healthy foods and diets might not be beneficial or tolerated by everyone.
While the benefits of nutrition targeting your microbiome and mitochondria for general health are increasingly clear, this approach is still in the early days of exploration in endurance sports.
For the occasional exerciser and weekend warrior, whole nutrition strategies that support the microbiome and mitochondria could be quite helpful. These strategies have the potential to improve performance, protect against adverse training effects and prevent chronic health conditions like obesity, cancer and Alzheimer’s disease.
For elite athletes seeking even the smallest of improvements in an already finely tuned training regimen, further research into the gut microbiome’s influence on performance might be invaluable. In a highly competitive field where nothing can be left off the table — or in the cupboard — such interventions might just be the deciding factor between finishing on the podium or off it.
For many people, the aroma of freshly brewed coffee is the start of a great day. But caffeine can cause headaches and jitters in others. That’s why many people reach for a decaffeinated cup instead.
I’m a chemistry professor who has taught lectures on why chemicals dissolve in some liquids but not in others. The processes of decaffeination offer great real-life examples of these chemistry concepts. Even the best decaffeination method, however, does not remove all of the caffeine – about 7 milligrams of caffeine usually remain in an 8-ounce cup.
Producers decaffeinating their coffee want to remove the caffeine while retaining all – or at least most – of the other chemical aroma and flavor compounds. Decaffeination has a rich history, and now almost all coffee producers use one of three common methods.
In the relatively new carbon dioxide method, developed in the early 1970s, producers use high-pressure CO₂ to extract caffeine from moistened coffee beans. They pump the CO₂ into a sealed vessel containing the moistened coffee beans, and the caffeine molecules dissolve in the CO₂.
Once the caffeine-laden CO₂ is separated from the beans, producers pass the CO₂ mixture either through a container of water or over a bed of activated carbon. Activated carbon is carbon that’s been heated up to high temperatures and exposed to steam and oxygen, which creates pores in the carbon. This step filters out the caffeine, and most likely other chemical compounds, some of which affect the flavor of the coffee.
These compounds either bind in the pores of the activated carbon or they stay in the water. Producers dry the decaffeinated beans using heat. Under the heat, any remaining CO₂ evaporates. Producers can then repressurize and reuse the same CO₂.
This method, which requires expensive equipment for making and handling the CO₂, is extensively used to decaffeinate commercial-grade, or supermarket, coffees.
In the Swiss water process, producers initially soak a batch of green coffee beans in hot water, which extracts both the caffeine and other chemical compounds from the beans.
It's kind of like what happens when you brew roasted coffee beans – you place dark beans in clear water, and the chemicals that cause the coffee's dark color leach out of the beans into the water. In a similar way, the hot water pulls the caffeine from the not-yet-decaffeinated beans.
During the soaking, the caffeine concentration is higher in the coffee beans than in the water, so the caffeine moves from the beans into the water. Producers then take the beans out of the water and place them into fresh water, which has no caffeine in it – so the process repeats, and more caffeine moves out of the beans and into the water. The producers repeat this process, up to 10 times, until there's hardly any caffeine left in the beans.
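A rough, back-of-the-envelope illustration shows why the repeated soaks matter; the 50% figure below is an assumption chosen to keep the arithmetic simple, not a measured value. If each soak in fresh water removed about half of the caffeine still in the beans, the fraction remaining after n soaks would be:

remaining fraction = (1 - f)^n; with f = 0.5 and n = 10, that is (0.5)^10 ≈ 0.001, or about 0.1% of the original caffeine.

Fresh water restores the concentration difference each time, which is why many short soaks in clean water remove far more caffeine than one long soak left to reach equilibrium.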
The resulting water, which now contains the caffeine and any flavor compounds that dissolved out from the beans, gets passed through activated charcoal filters. These trap caffeine and other similarly sized chemical compounds, such as sugars and organic compounds called polyamines, while allowing most of the other chemical compounds to remain in the filtered water.
Producers then use the filtered water – saturated with flavor but devoid of most of the caffeine – to soak a new batch of coffee beans. This step lets the flavor compounds lost during the soaking process reenter the beans.
This animation shows the steps to the Swiss water process.
This traditional and most common approach, first used in the early 1900s, relies on organic solvents – liquids that dissolve organic chemical compounds such as caffeine. Ethyl acetate and methylene chloride are two common solvents used to extract caffeine from green coffee beans. There are two main solvent-based methods.
In the direct method, producers soak the moist beans directly in the solvent or in a water solution containing the solvent.
The solvent extracts most of the caffeine and other chemical compounds with a similar solubility to caffeine from the coffee beans. The producers then remove the beans from the solvent after about 10 hours and dry them.
In the indirect method, producers soak the beans in hot water for a few hours and then take them out. They then treat the water with solvent to remove caffeine from the water. Methylene chloride, the most common solvent, does not dissolve in the water, so it forms a separate layer; being denser than water, it settles below it. The caffeine dissolves better in methylene chloride than in water, so most of the caffeine ends up in the methylene chloride layer, which producers can separate from the water.
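The split between the two layers can be sketched with the standard liquid-liquid extraction relationship; the numbers below are illustrative assumptions, not measured values for caffeine in this process. If K is the partition coefficient (the equilibrium ratio of the caffeine concentration in the solvent to that in the water), and V_s and V_w are the solvent and water volumes, then the fraction of caffeine pulled into the solvent in one pass is roughly:

fraction extracted ≈ K × V_s / (K × V_s + V_w)

With an assumed K of 5 and equal volumes, about 5/6, or roughly 83%, of the caffeine would end up in the solvent layer in a single pass; as with the Swiss water soaks, repeating the extraction with fresh solvent removes most of what remains.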
As in the Swiss water method, the producers can reuse the “caffeine-free” water, which may return some of the flavor compounds removed in the first step.
One of the common solvents, ethyl acetate, occurs naturally in many foods and beverages. It's considered by the Food and Drug Administration to be a safe chemical for decaffeination.
The FDA and the Occupational Safety and Health Administration have deemed methylene chloride unsafe to consume at concentrations above 10 milligrams per kilogram of your body weight. However, the amount of residual methylene chloride found in roasted coffee beans is very small – about 2 to 3 milligrams per kilogram. It’s well under the FDA’s limits.
OSHA and its European counterparts have strict workplace rules to minimize methylene chloride exposure for workers involved in the decaffeination process.
After producers decaffeinate coffee beans using methylene chloride, they steam the beans and dry them. Then the coffee beans are roasted at high temperatures. During the steaming and roasting process, the beans get hot enough that residual methylene chloride evaporates. The roasting step also produces new flavor chemicals as compounds in the beans break down into other chemical compounds. These give coffee its distinctive flavor.
Plus, most people brew their coffee at between 190 F and 212 F, which is another opportunity for methylene chloride to evaporate.
Retaining aroma and flavor
It’s chemically impossible to dissolve out only the caffeine without also dissolving out other chemical compounds in the beans, so decaffeination inevitably removes some other compounds that contribute to the aroma and flavor of your cup of coffee.
But some techniques, like the Swiss water process and the indirect solvent method, have steps that may reintroduce some of these extracted compounds. These approaches probably can't return all of the extracted compounds to the beans, but they may add some of the flavor compounds back.
Thanks to these processes, you can have that delicious cup of coffee without the caffeine – unless your waiter accidentally switches the pots.
Cheese is a relatively simple food. It’s made with milk, enzymes – these are proteins that can chop up other proteins – bacterial cultures and salt. Lots of complex chemistry goes into the cheesemaking process, which can determine whether the cheese turns out soft and gooey like mozzarella or hard and fragrant like Parmesan.
In fact, humans have been making cheese for about 10,000 years. Roman soldiers were given cheese as part of their rations. It is a nutritious food that provides protein, calcium and other minerals. Its long shelf life allows it to be transported, traded and shipped long distances.
I am a food scientist at the University of Wisconsin who has studied cheese chemistry for the past 35 years.
In the U.S., cheese is predominantly made with cow’s milk. But you can also find cheese made with milk from other animals like sheep, goats and even water buffalo and yak.
Unlike makers of yogurt, another fermented dairy product, cheesemakers remove whey – which is mostly water – to make cheese. Milk is about 90% water, whereas a cheese like cheddar is less than about 38% water.
Removing water from milk to make cheese results in a harder, firmer product with a longer shelf life, since milk is very perishable and spoils quickly. Before the invention of refrigeration, milk would quickly sour. Making cheese was a way to preserve the nutrients in milk so you could eat it weeks or months in the future.
How is cheese made?
All cheesemakers first pump milk into a cheese vat and add a special enzyme called rennet. This enzyme destabilizes the proteins in the milk – the proteins then aggregate and form a gel. The cheesemaker is essentially turning milk from a liquid into a gel.
After anywhere from 10 minutes to an hour, depending on the type of cheese, the cheesemaker cuts this gel, typically into cubes. Cutting the gel helps some of the whey, or water, separate from the cheese curd, which is made of the aggregated milk proteins and looks like a yogurt gel. Cutting the gel into cubes lets some water escape from the newly cut surfaces through small pores, or openings, in the gel.
The cheesemaker’s goal is to remove as much whey and moisture from the curd as they need to for their specific recipe. To do so, the cheesemaker might stir or heat up the curd, which helps release whey and moisture. Depending on the type of cheese made, the cheesemaker will drain the whey and water from the vat, leaving behind the cheese curds.
Wisconsin Master Cheesemaker Gary Grossen cuts a vat of cheese with a cheese harp during a cheesemaking short course at the Center for Dairy Research in Madison, Wis. Cutting helps release whey during the cheesemaking process. UW Center for Dairy Research
For a harder cheese like cheddar, the cheesemaker adds salt directly to the curds while they’re still in the vat. Salting the curds expels more whey and moisture. The cheesemaker then packs the curds together in forms or hoops – these are containers that help shape the curds into a block or wheel and hold them there – and places them under pressure. The pressure squeezes the curds in these hoops, and they knit together to form a solid block of cheese.
Cheesemakers salt other cheeses, like mozzarella, by placing them in a salt solution called a brine. The cheese block or wheel floats in a brine tank for hours, days or even weeks. During that time, the cheese absorbs some of the salt, which adds flavor and protects against unwanted bacterial or pathogen growth.
While the cheesemaker is completing all these steps, several important bacterial processes are occurring. The cheesemaker adds cheese cultures, which are bacteria they choose that produce specific flavors, at the beginning of the process. Adding them to the milk while it is still liquid gives the bacteria time to ferment the lactose in the milk.
Historically, cheesemakers used raw milk, and the bacteria in the raw milk soured the cheese. Now, cheesemakers use pasteurization, a mild heat treatment that destroys any pathogens present in the raw milk. But using this treatment means the cheesemakers need to add back in some bacteria called starters – these “start” the fermentation process.
Pasteurization provides a more controlled process for the cheesemaker, as they can select specific bacteria to add, rather than whatever is present in the raw milk.
Essentially, these bacteria eat (ferment) the sugar – the lactose – and in doing so produce lactic acid, as well as other desirable flavor compounds in the cheese like diacetyl, which smells like hot buttered popcorn.
In some types of cheese, these cultures stay active in the cheese long after it leaves the cheese vat. Many cheesemakers age their cheeses for weeks, months or even years to give the fermentation process more time to develop the desired flavors. Aged cheeses include Parmesan, aged cheddars and Gouda.
A Wisconsin cheesemaker inspects a wheel of Parmesan in the aging room. Aging is an important step in the production of many cheeses, as it allows for flavor development. The Dairy Farmers of Wisconsin
In essence, cheesemaking is a milk concentration process. Cheesemakers want their final product to retain the milk's proteins, fat and nutrients, without as much of the water. For example, the main milk protein captured in the cheesemaking process is casein. Milk might be about 2.5% casein, but a finished cheese like cheddar may be about 25% casein (protein). So cheese contains lots of nutrients, including protein, calcium and fat.
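To put that concentration in perspective, here is a rough, illustrative calculation; the cheese yield figure is an assumption made for the sake of the arithmetic, not a number from the article. 100 kilograms of milk at 2.5% casein contains about 2.5 kilograms of casein. If nearly all of that casein is captured in roughly 10 kilograms of finished cheddar, then:

2.5 kg casein ÷ 10 kg cheese ≈ 25% casein

That roughly tenfold concentration mirrors the removal of most of the milk's water, from about 90% in milk down to 38% or less in cheddar.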
Infinite possibilities with cheese
There are hundreds of different varieties of cow’s milk cheese made across the globe, and they all start with milk. All of these different varieties are produced by adjusting the cheesemaking process.
For some cheeses, like Limburger, the cheesemaker rubs a smear – a solution containing various types of bacteria – on the cheese’s surface during the aging process. For others, like Camembert, the cheesemaker places the cheese in an environment (e.g., a cave) that encourages mold growth.
Others like bandaged cheddar are wrapped with bandages or covered with ash. Adding a bandage or ash onto the cheese’s surface helps protect it from excessive mold growth, and it reduces the amount of moisture lost to evaporation. This creates a harder cheese with stronger flavors.
Wisconsin Master Cheesemaker Joe Widmer in his brick cheese aging room. Brick cheese is a smear-ripened cheese – it is produced by applying a salt solution to the exterior of the cheese as it ages. Dairy Farmers of Wisconsin
Over the past 60 years, cheesemakers have figured out how to select the right bacterial cultures to make cheese with specific flavors and textures. The possibilities are endless, and there’s no limit to the cheesemaker’s imagination.