White House acting chief of staff Mick Mulvaney attempted to avoid testifying before Congress this week by joining a lawsuit brought by John Bolton's former deputy Charles Kupperman. The lawsuit asks the courts to determine which takes precedence: a congressional subpoena or a White House demand not to comply with that subpoena.
According to the Washington Post, some close to Bolton and Kupperman said that both men "were flabbergasted" that Mulvaney wanted to join the lawsuit "because they and others on the national security team considered Mulvaney a critical player in the effort to get the Ukrainian government to pursue investigations into Trump's political opponents."
Bolton reportedly sees Mulvaney as an active participant in Trump's campaign to pressure witnesses not to testify about the Ukraine scandal. The former national security adviser once called the scandal "a drug deal" and Trump's lawyer Rudy Giuliani "a drug dealer," his former aides testified.
White House officials said that Bolton and Mulvaney weren't even speaking when Bolton left.
Mulvaney's lawyer, William Pittard, told The Post that Mulvaney is trying to navigate the separation-of-powers dispute between Congress and the White House.
"As acting chief of staff, Mr. Mulvaney intends to follow any lawful order of the president and has no reason to think that the order at issue is unlawful — other than the fact the House has threatened him with charges of contempt and obstruction for following it," Pittard told The Post.
The report cited Laurence Tribe, a constitutional law expert, who said that Mulvaney is likely trying to "shield" himself "from having to obey his legal duty to comply with an obviously valid subpoena."
Our new study published in Nature Geoscience on an ancient chain of Australian volcanoes is helping to change our understanding of “hotspot” volcanism.
You may be surprised to learn eastern Australia hosts the longest chain of continental hotspot volcanoes on Earth. These volcanoes erupted during the last 35 million years (for 1 to 7 million years each), as the Australian continent moved over an area of heat (a hotspot) inside the planet, also known as a fixed heat anomaly or mantle plume.
But it appears the Australian hotspot waned with time. And we have found the volcanoes’ inner structure and eruptions changed as a result. Our new findings show hotspot strength has key impacts on the evolution of volcanoes’ inner structure, along with their location and lifespan.
Hotspots change Earth’s surface
Hotspot volcanoes can produce very large volumes of lava and have an important role in Earth’s evolution and atmosphere. Today, famously active hotspot volcanoes include the Hawaiian volcanoes in the Pacific Ocean and the Canary Islands in the Atlantic Ocean. These are known as ocean island volcanoes.
The Australian hotspot chain provides a continental perspective and covers the life cycle of a hotspot – a unique opportunity to better understand how hotspot volcanoes work, why they erupt, and how they evolve with time.
We found that the strength of the hotspot and its magma supply control the duration, make-up and explosiveness of volcanoes at the surface. Around 35 to 27 million years ago, the early Australian hotspot was strong and generated enormous, long-lasting volcanoes across Queensland where magma (molten rock) took a direct route to the surface.
In contrast, the more recent (20 to 6 million years ago) New South Wales volcanoes are smaller and had shorter lifetimes, suggesting the hotspot lost strength with time. Interestingly, reduced supply made the magma’s journey to the surface more complicated, with many stops (magma chambers) and more explosive eruptions.
The tipping point occurred at the stunning Tweed-Wollumbin (Mount Warning) volcanic landscape, which formed 21–24 million years ago at today’s border between Queensland and New South Wales.
A view of the volcanic Tweed Valley with Wollumbin (Mount Warning) in the foreground. Jiri Viehmann/Shutterstock
The secret journey of magma
To discover the journey of magma inside the volcano, and the stops it made on its way to eruption, we analysed volcanic crystals. These are the little heroes that make it all the way to the surface. Mainly composed of silicate minerals like olivine, pyroxene and plagioclase, the crystals grow in the guts of the volcano at high temperature, and register what happens before eruptions start.
These crystals are quite simple in northern volcanoes like Buckland in Queensland, which means they travelled through a few simple magma chambers. In contrast, the crystals become very complex in southern volcanoes like Nandewar and Warrumbungle in New South Wales, which means they had a complicated journey through lots of busy magma chambers – lots of stops.
Importantly, when magma stops in a chamber, it cools down and becomes more viscous and difficult to erupt – a bit like cold toothpaste, instead of hot coffee. This thick, lazy magma may need new, hotter magma (caffeinated!) to come and push it to erupt.
If that happens, the gases trapped in the colder magma may not be able to escape, since the magma is so thick. This results in a pressure buildup, eventually exploding like a shaken bottle of fizzy drink – an explosive volcanic eruption.
A special clock
The cold and hardened lava flows we see in the form of volcanic rocks contain a special clock – radioactive chemical elements have slowly broken down into stable daughter products that accumulate and increase in concentration as time passes.
The beauty of this process is that we know how fast it occurs. By measuring the ratio of the radioactive element and its stable daughter product we can calculate the age of a volcanic rock. By measuring the age of each lava flow from the bottom to the top of the volcano, we can measure its lifetime.
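The arithmetic behind this clock is simple. As a sketch (assuming a generic single parent-daughter decay system in a closed rock with no initial daughter product, and using illustrative numbers rather than values from the study), the age follows from t = ln(1 + D/P) / λ, where D/P is the measured daughter-to-parent ratio and λ is the decay constant:

```python
import math

def age_years(daughter_parent_ratio: float, half_life_years: float) -> float:
    """Age of a rock from the daughter/parent ratio of a decay system.

    Assumes a closed system with no daughter product present when the
    lava solidified, so that t = ln(1 + D/P) / lambda.
    """
    decay_constant = math.log(2) / half_life_years  # lambda = ln(2) / half-life
    return math.log(1 + daughter_parent_ratio) / decay_constant

# Illustrative values only (not taken from the study): a hypothetical
# system with a half-life of 1.25 billion years.
t = age_years(daughter_parent_ratio=0.0133, half_life_years=1.25e9)
print(f"{t / 1e6:.0f} million years")
```

With these illustrative figures the calculation returns an age of roughly 24 million years, comparable to the ages of volcanoes in the middle of the Australian chain. Dating each lava flow this way, from the bottom of a volcano to its top, brackets the volcano's lifetime.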
Our study shows the relevance of Australian volcanoes, even if mostly extinct, in better understanding eruptions that have shaped the evolution of our planet. We demonstrate the fundamental role of hotspot strength and magma supply on Earth’s landscape, as well as the eruption styles and lifetimes of volcanoes.
This breakthrough makes it possible to visualize the inner structure of hotspot volcanoes and their evolution, which are uniquely accessible in the ancient, exposed Australian landscape.
Since it was founded in 2016, Elon Musk’s brain-computer interface (BCI) company Neuralink has had its moments in biotech news.
Whether it was the time Musk promised his “link” would let people communicate telepathically, or when the whole company was under investigation for potentially violating the Animal Welfare Act, the hype around Neuralink means it’s often the first mental reference people have for BCI technology.
But BCIs have been kicking around for much longer than you’d expect. Musk’s is just one in a growing list of companies dedicated to advancing this technology. Let’s take a look back at some BCI milestones over the past decades, and forward to where they might lead us.
An expanding sector
Brain-computer interfaces are devices that connect the brain with a computer to allow the user to complete some kind of action using their brain signals.
Many high-profile companies entered the BCI field in the 2010s, backed by millions of dollars in investment. Founded in 2016, the American company Kernel began by researching implantable devices, before switching to focus on non-invasive techniques that don’t require surgery.
Even Facebook gave BCIs a go, with an ambitious plan to create a headset that would let users type 100 words per minute. But it stopped this research in 2021 to focus on other types of human-computer interfaces.
Developed in the 1970s, the earliest BCIs were relatively straightforward, and were used on cats and other animals to develop communication pathways. The first BCI that let a human user control a cursor with their brain signals, an EEG-based system, was developed by Jonathan Wolpaw and colleagues in 1991.
Advances in machine learning through the years paved the way for more sophisticated BCIs. These could control complex devices, including robotic limbs, wheelchairs and exoskeletons. We’ve also seen devices get progressively smaller and easier to use thanks to wireless connectivity.
Like many newer BCI devices, Neuralink has yet to receive approval for clinical trials of its invasive implant. Its latest application to the US Food and Drug Administration was rejected.
There are, however, three notable groups conducting clinical trials that are worth keeping an eye on.
1. BrainGate

Founded in Massachusetts in 1998, BrainGate is one of the oldest advanced BCI implant systems. Its device is placed in the brain using microneedles, similar to the technology Neuralink uses.
BrainGate’s devices are probably the most advanced when it comes to BCI functionality. One of its wired devices offers a typing speed of 90 characters per minute, or 1.5 characters per second. A study published in January released results from data collected over 17 years from 14 participants.
During this time there were 68 instances of “adverse events” including infection, seizures, surgical complications, irritation around the implant, and brain damage. However, the most common event was irritation. Only six of the 68 incidents were considered “serious”.
Apart from communication applications, BrainGate has also achieved robotic control for self-feeding.
2. UMC Utrecht
The University Medical Centre in Utrecht, Netherlands, was the first to achieve fully wireless implanted BCI technology that patients could take home.
Its device uses electrocorticography-based BCI (ECoG). Electrodes in the form of metal discs are placed directly on the surface of the brain to receive signals. They connect wirelessly to a receiver, which in turn connects to a computer.
Participants in a clinical trial that ran from 2020 to 2022 were able to take the device home and use it every day for about a year. It allowed them to control a computer screen and type at a speed of two characters per minute.
While this typing speed is slow, future versions with more electrodes are expected to perform better.
3. Synchron (originally SmartStent)
Synchron was founded in 2016 in Melbourne, Australia. In 2019, it became the first company to be approved for clinical trials in Australia. Then in 2020 it became the first company to receive FDA approval to run clinical trials using a permanently implanted BCI – and it carried out the first such implantation in a US patient this year.
Synchron’s approach is to bypass full brain surgery by using blood vessels to implant electrodes in the brain. This minimally invasive approach is similar to other stenting procedures routinely performed in clinics.
Synchron’s very small ‘stentrode’ can be implanted with a minimally invasive procedure. Synchron
Synchron’s device is placed in the brain near the area that controls movement, and a wireless transmitter is placed in the chest. This transmitter then conveys brain signals to a computer.
Initial clinical results have shown no adverse effects and a functionality of 14 characters per minute using both the BCI and eye tracking. Results were not reported for BCI use alone.
Although its device efficiency could be improved, Synchron’s approach means it leads the way in achieving a low barrier for entry. By avoiding the need for full brain surgery, it’s helping to bring BCI implantation closer to being a day procedure.
The benefits must outweigh the risks
The history of BCIs reveals the immense challenges involved in developing this technology. These are compounded by the fact that experts still don’t fully understand the links between our neural circuitry and thoughts.
It’s also unclear which BCI features consumers will prioritize moving forward, or what they’d be willing to sign up for. Not everyone will happily opt for an invasive brain procedure – yet the systems that don’t require surgery collect noisier data, and are less efficient as a result.
Electroencephalogram-based (EEG) BCIs don’t require surgery, but being less invasive means they’re also less effective. Shutterstock
Answers will emerge as more devices gain approval for clinical trials and research is published on the results.
Importantly, developers of these technologies must not rush through trials. They have a responsibility to be transparent about the safety and efficacy of their devices, and to report on them openly so consumers can make informed decisions.
Imagine what life would be like if you couldn’t recognize your own family and friends unless they told you who they were. Now imagine no one will believe you and that even your doctor dismisses you, saying everyone forgets names sometimes.
Two recent studies show this is a common experience for people with a brain disorder called “developmental prosopagnosia” – or, as it is more informally known, faceblindness. This type of prosopagnosia is lifelong, in contrast to “acquired prosopagnosia”, which can develop after a brain injury. Sufferers struggle to recognize people they know well and, in extreme cases, close family members and even photographs of themselves.
One of the studies, published in December 2022, comes from my lab at Edge Hill University. Our results suggested that up to 85% of people with faceblindness would not get diagnosed if they tried traditional approaches. For example, if participants complained to their doctor that they were failing to recognize friends and family members, they were often told their face recognition skills were normal. This can have a terrible impact on people, leaving them confused, frustrated and upset.
Researchers at Harvard University published a paper in February 2023 that came to the same conclusion: many people with faceblindness won’t get a diagnosis from their clinician using current medical assessments. The current procedure requires people to score worse than 97.5% of the general public on both of two computer-based tests.
Drawing a blank
The first of these tasks is a “famous faces” test, where patients have to identify celebrities from their photographs (for example, Brad Pitt or Bill Clinton). In the second task, patients are asked to memorize a series of unfamiliar faces, then pick them out from a larger group – similar to how you would identify a criminal suspect in a police line-up.
This is the most common approach used by clinicians and researchers across Europe, North America and Australasia. However, the Harvard research and that by my own lab found that many prosopagnosia cases would not meet the criteria currently required for a diagnosis.
Our study tested 61 people who reported daily difficulties recognizing faces. Assessments were carried out online due to COVID-19 restrictions, and we found that 85% of participants would not have met the diagnosis threshold on the computer tests. The Harvard study suggested that roughly 60-70% of people who struggle to recollect faces may be denied a diagnosis.
Why do people with prosopagnosia perform too well on medical tests to get a diagnosis? One reason may be because of day-to-day changes in their ability to focus – for example, did they have a coffee this morning, or a good night’s sleep? Previous research has shown prosopagnosics’ scores on face tests change from one testing session to the next.
Computer-based tests may also be missing something about how we recognize faces in person. In the real world, we see faces in three dimensions, and they are moving as someone walks towards us and speaks. The current tests only use still images in two dimensions.
A different result
So, how should we diagnose prosopagnosia instead? While the Harvard group and I agree that we need to be much more understanding towards people who believe they have the condition, we differ in our views on how this should be accomplished.
The Harvard lab proposes we should diagnose people with prosopagnosia if they score in the bottom 16% of the general population on the two face recognition tests. One problem with this approach is that it will still block many people who report trouble with faces from getting help.
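To put the two cutoffs side by side: if test scores are roughly normally distributed, the current criterion (about two standard deviations below the mean, i.e. worse than ~97.5% of the public) admits only the worst-scoring ~2.5% of people, while the proposed criterion admits the bottom ~16% (about one standard deviation below the mean). A quick sketch using Python's standard library – the exact thresholds used clinically may differ slightly:

```python
from statistics import NormalDist

standard_normal = NormalDist()  # mean 0, standard deviation 1

# Fraction of the population scoring below each cutoff, expressed in
# standard deviations (SD) below the mean.
strict_cutoff = standard_normal.cdf(-2)   # close to the "worse than 97.5%" rule
relaxed_cutoff = standard_normal.cdf(-1)  # the proposed "bottom 16%" rule

print(f"2 SD below the mean: bottom {strict_cutoff:.1%} of scorers")
print(f"1 SD below the mean: bottom {relaxed_cutoff:.1%} of scorers")
```

The relaxed criterion thus admits roughly seven times as many people as the strict one – yet, as the results above suggest, it can still exclude many who report genuine daily difficulty with faces.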
I would argue we should be guided by the patient’s symptoms when deciding on a diagnosis. Symptoms can be assessed by asking people how strongly they agree with statements like “I often mistake people I have met before for strangers”. These are taken from a questionnaire called the prosopagnosia index, first developed by a British research group in 2015.
This approach is used for other psychological conditions such as depression and post-traumatic stress disorder. Only with this method can we understand the range of the prosopagnosia spectrum, and avoid unnecessary suffering that comes with a lack of diagnosis.
The prosopagnosia index only takes a couple of minutes to administer, while computer based tests can take up to an hour. Diagnosing people more rapidly gives doctors more time to discuss options with their patients, such as computer training with faces and coping mechanisms. The latter includes telling friends and colleagues about your condition, and requesting they introduce themselves each time you meet.
Research in this area is ongoing so if you, or someone you know, thinks they might have prosopagnosia (either acquired or developmental) and would like to be tested, or you have failed to get a diagnosis in the past from a clinician, please consider taking part.
For those who might still be skeptical, I should add: faceblindness is a real disorder. People with this condition have atypical neural responses when they view faces, which suggests their brains are not processing faces as they should.
If you happen to meet someone with faceblindness – and the chances are very high, given that one in 30 may have the condition – please be understanding. Give them cues as to who you are and where you met them. A little patience can make all the difference.