Florida Governor Ron DeSantis speaks at the University of Miami in 2019. (Shutterstock.com)
Florida Gov. Ron DeSantis is facing a state ethics complaint after allegedly accepting illegal gifts and donations.
The complaint was filed on Monday by Florida Democratic Chair Nikki Fried.
"Florida law prohibits Ron DeSantis or a member of his immediate family from soliciting or knowingly accepting, directly or indirectly, any gift from a political committee," the complaint states. "On February 21, 2023, Friends of Ron DeSantis paid $235,244.52 to the Four Seasons Resort Palm Beach... Media reports indicated that Ron DeSantis' top donors and supporters gathered at the Four Seasons Resort Palm Beach for a three-day retreat as part of an effort to increase his national profile in advance of an anticipated run for the Republican nomination as President of the United States in 2024."
According to the complaint, Friends of Ron DeSantis is a political action committee formed to promote the governor's anticipated presidential campaign.
The complaint claims that DeSantis flouted Florida law because Friends of Ron DeSantis spent $235,000 at Four Seasons Palm Beach, $142,000 at Four Seasons Miami, and $11,000 at Dirty French Steakhouse while soliciting support for his 2024 presidential campaign.
Health practitioners are increasingly concerned that because race is a social construct, and the biological mechanisms of how race affects clinical outcomes are often unknown, including race in predictive algorithms for clinical decision-making may worsen inequities.
Based on this algorithm, which was trained on measured GFR values from patients, a Black patient would be assigned a higher eGFR than a non-Black patient of the same age, sex and serum creatinine level. This implies that some Black patients would be considered to have healthier kidneys than otherwise similar non-Black patients, and would therefore be less likely to be prioritized for a kidney transplant.
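For illustration, the widely used 2009 CKD-EPI creatinine equation can be sketched in a few lines of Python. This is a simplified sketch, not clinical software; the coefficients are the published 2009 values, and the race multiplier shown is the term that was removed in the 2021 race-free revision:

```python
# Simplified sketch of the 2009 CKD-EPI creatinine equation (illustration
# only, not clinical software). scr is serum creatinine in mg/dL; the
# result is eGFR in mL/min/1.73 m^2.
def egfr_ckd_epi_2009(scr, age, female, black):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr / kappa, 1.0) ** alpha
            * max(scr / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race coefficient, removed in the 2021 revision
    return egfr
```

For the same age, sex and creatinine level, the 1.159 multiplier assigns a Black patient a roughly 16% higher eGFR, which is exactly the effect described above.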
Biased clinical algorithms can lead to inaccurate diagnoses and delayed treatment.
In 2021, however, researchers found that excluding race from the original eGFR equations could lead to larger discrepancies between estimated and actual GFR values for both Black and non-Black patients. They also found that adding a biomarker called cystatin C can improve predictions. However, even with this biomarker, excluding race from the algorithm still led to elevated discrepancies across races.
Researchers use different economic frameworks to understand how society allocates resources. Two key frameworks are utilitarianism and equality of opportunity.
A purely utilitarian outlook seeks to identify which features would maximize a positive outcome or minimize the harm from a negative one, regardless of who possesses those features. This approach allocates resources to those best positioned to generate positive outcomes or mitigate negative ones.
A utilitarian approach would always include race and ethnicity if doing so improves the predictive power and accuracy of algorithms, regardless of whether it's fair. For example, utilitarian policies would aim to maximize overall survival among people seeking organ transplants. They would allocate organs to those expected to survive the longest after transplantation, even if patients who need the organs most, but are expected to survive for less time because of circumstances outside their control, would die sooner without a transplant.
Although utilitarian approaches do not take fairness into account, an approach that does would ask two questions: How do we define fairness? Are there conditions when maximizing an algorithm’s prediction power and accuracy would not conflict with fairness?
To answer these questions, I apply the equality of opportunity framework, which aims to allocate resources in a way that allows everyone the same chance of obtaining similar outcomes, without being disadvantaged by circumstances outside of their control. Researchers have used this framework in many contexts, such as political science, economics and law. The U.S. Supreme Court has also applied equality of opportunity in several landmark rulings in education.
There are two fundamental principles in equality of opportunity.
First, inequality of outcomes is unethical if it results from differences in circumstances that are outside of an individual’s own control, such as the income of a child’s parents, exposure to systemic racism or living in violent and unsafe environments. This can be remedied by compensating individuals with disadvantaged circumstances in a way that allows them the same opportunity to obtain certain health outcomes as those who are not disadvantaged by their circumstances.
Second, inequality of outcomes among people in similar circumstances that results from differences in individual effort, such as practicing health-promoting behaviors like diet and exercise, is not unethical, and policymakers can reward those achieving better outcomes through such behaviors. However, differences in individual effort that occur because of circumstances, such as living in an area with limited access to healthy food, are not attributed to the individual under equality of opportunity. Holding all circumstances equal, any remaining differences in effort between individuals should be due to preferences, free will and perceived benefits and costs. This is called accountable effort. So two individuals with the same circumstances should be rewarded according to their accountable efforts, and society should accept the resulting differences in outcomes.
Equality of opportunity implies that if algorithms were to be used for clinical decision-making, then it is necessary to understand what causes variation in the predictions they make.
If variation in predictions results from differences in circumstances or biological conditions but not from individual accountable effort, then it is appropriate to use the algorithm for compensation, such as allocating kidneys so everyone has an equal opportunity to live the same length of life, but not for reward, such as allocating kidneys to those who would live the longest with the kidneys.
In contrast, if variation in predictions results from differences in individual accountable effort but not from their circumstances, then it is appropriate to use the algorithm for reward but not compensation.
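The compensation-versus-reward distinction can be made concrete with a toy sketch. Everything here is hypothetical: the names, the weights and the stylized prediction function are invented for illustration, not drawn from my study:

```python
# Toy illustration of compensation vs. reward under equality of
# opportunity. All names, weights and numbers are hypothetical.
people = [
    # (name, circumstance_disadvantage in [0, 1], accountable_effort in [0, 1])
    ("A", 0.8, 0.2),  # heavily disadvantaged circumstances
    ("B", 0.1, 0.9),  # favorable circumstances, high effort
    ("C", 0.5, 0.5),
]

def predicted_outcome(circ, effort):
    # A stylized prediction: outcomes improve with accountable effort
    # and worsen with circumstances outside the individual's control.
    return 0.6 * effort - 0.4 * circ

# Compensation: direct the resource to whoever is held back most by
# circumstances, so everyone has the same opportunity.
compensate = max(people, key=lambda p: p[1])

# Reward: direct the resource to whoever achieves the best predicted
# outcome through accountable effort, with circumstances held equal.
reward = max(people, key=lambda p: predicted_outcome(0.0, p[2]))
```

Under the compensation rule the resource goes to the most disadvantaged person; under the reward rule it goes to the person with the highest accountable effort, even though both rules read off the same prediction function.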
Evaluating clinical algorithms for fairness
To hold machine learning and other artificial intelligence algorithms accountable to a standard of equity, I applied the principles of equality of opportunity to evaluate whether race should be included in clinical algorithms. I ran simulations under both ideal data conditions, where all data on a person's circumstances is available, and real data conditions, where some of that data is missing.
As a social construct, race is often a proxy for nonbiological circumstances.
I evaluated two categories of algorithms.
The first, diagnostic algorithms, makes predictions based on outcomes that have already occurred at the time of decision-making. For example, diagnostic algorithms are used to predict the presence of gallstones in patients with abdominal pain or urinary tract infections, or to detect breast cancer using radiologic imaging.
The second, prognostic algorithms, predicts future outcomes that have not yet occurred at the time of decision-making. For example, prognostic algorithms are used to predict whether a patient will live if they do or do not obtain a kidney transplant.
I found that, under an equality of opportunity approach, diagnostic models that do not take race into account would increase systemic inequities and discrimination. I found similar results for prognostic models intended to compensate for individual circumstances. For example, excluding race from algorithms that predict the future survival of patients with kidney failure would fail to identify those with underlying circumstances that make them more vulnerable.
Including race in prognostic models intended to reward individual efforts can also increase disparities. For example, including race in algorithms that predict how much longer a person would live after a kidney transplant may fail to account for individual circumstances that could limit how much longer they live.
Unanswered questions and future work
New biomarkers may one day predict health outcomes better than race and ethnicity do. Until then, including race in certain clinical algorithms could help reduce disparities.
Although my study uses an equality of opportunity framework to measure how race and ethnicity affect the results of prediction algorithms, researchers don't know whether other approaches to fairness would lead to different recommendations. How to choose among competing definitions of fairness also remains an open question. Moreover, there are questions about how multiracial groups should be coded in health databases and algorithms.
My colleagues and I are exploring many of these unanswered questions to reduce algorithmic discrimination. We believe our work will readily extend to other areas outside of health, including education, crime and labor markets.
Can a computer learn from the past and anticipate what will happen next, like a human? You might not be surprised to hear that some cutting-edge AI models could achieve this feat, but what about a computer that looks a little different – more like a tank of water?
We have built a small proof-of-concept computer that uses running water instead of a traditional logical circuitry processor, and forecasts future events via an approach called “reservoir computing”.
In benchmark tests, our analogue computer did well at remembering input data and forecasting future events – and in some cases it even did better than a high-performance digital computer.
So how does it work?
Throwing stones in the pond
Imagine two kids, Alice and Bob, playing at the edge of a pond. Bob throws big and small stones into the water one at a time, seemingly at random.
Big and small stones create water waves of different size. Alice watches the water waves created by the stones and learns to anticipate what the waves will do next – and from that, she can have an idea of which stone Bob will throw next.
Bob throws rocks into the pond, while Alice watches the waves and tries to predict what's coming next. Yaroslav Maksymov, Author provided
Reservoir computers mimic the reasoning process taking place in Alice's brain: they learn from past inputs to predict future events.
Although reservoir computers were first proposed using neural networks – computer programs loosely based on the structure of neurons in the brain – they can also be built with simple physical systems.
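The neural-network version, known as an echo state network, can be sketched in a few lines. The reservoir size, the parameters and the next-step sine prediction task below are illustrative choices, not the setup used in our experiments:

```python
import numpy as np

# Minimal echo state network: a fixed random "reservoir" transforms the
# input into a rich state, and only the linear readout is trained.
rng = np.random.default_rng(0)
n_res, leak, ridge = 200, 0.3, 1e-6

Win = rng.uniform(-0.5, 0.5, n_res)            # input weights (fixed)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # reservoir weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run(u_seq):
    """Drive the reservoir with a 1-D input sequence, collect states."""
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(Win * u + W @ x)
        states.append(x)
    return np.array(states)

# Task: one-step-ahead prediction of a sine wave.
t = np.arange(600)
u = np.sin(0.1 * t)
X = run(u[:-1])   # reservoir states driven by the input
Y = u[1:]         # next-step targets

# Train the readout with ridge regression; the reservoir stays untouched.
Wout = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ Wout
```

Only the readout weights `Wout` are trained; the reservoir itself, like the water in our experiments, is a fixed dynamical system that merely mixes the input's past into its current state.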
Reservoir computers are analogue computers. An analogue computer represents data continuously, as opposed to digital computers which represent data as abruptly changing binary “zero” and “one” states.
Representing data in a continuous way enables analogue computers to model certain natural events – ones that occur in a kind of unpredictable sequence called a “chaotic time series” – better than a digital computer.
How to make predictions
To understand how we can use a reservoir computer to make predictions, imagine you have a record of daily rainfall for the past year and a bucket full of water near you. The bucket will be our “computational reservoir”.
We input the daily rainfall record into the bucket by throwing stones: for a day of light rain, a small stone; for a day of heavy rain, a big stone; for a day of no rain, no stone.
Each stone creates waves, which then slosh around the bucket and interact with waves created by other stones.
At the end of this process, the state of the water in the bucket gives us a prediction. If the interactions between waves create large new waves, we can say our reservoir computer predicts heavy rains. But if they are small then we should expect only light rain.
It is also possible that the waves will cancel one another, forming a still water surface. In that case we should not expect any rain.
The reservoir makes a weather forecast because the waves in the bucket and rainfall patterns evolve over time following the same laws of physics.
The “bucket of water” reservoir computer has its limits. For one thing, the waves are short-lived. To forecast complex processes such as climate change and population growth, we need a reservoir with more durable waves.
One option is “solitons”. These are self-reinforcing waves that keep their shape and move for long distances.
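The textbook example is the soliton solution of the Korteweg-de Vries equation, whose sech-squared profile travels at constant speed without changing shape. The snippet below is an illustration of that standard solution, not a model of our experimental waves:

```python
import numpy as np

# Classic KdV soliton u(x, t) = (c/2) sech^2(sqrt(c)/2 * (x - c*t)):
# taller solitons travel faster, and the profile never changes shape.
def soliton(x, t, c=1.0):
    return (c / 2) / np.cosh(np.sqrt(c) / 2 * (x - c * t)) ** 2

x = np.linspace(-20, 20, 2001)
early = soliton(x, 0.0)              # profile at t = 0
late = soliton(x + 5.0, 5.0)         # same profile, viewed in a frame
                                     # shifted by c*t = 5 at t = 5
```

Comparing `early` and `late` shows the wave has simply translated: the self-reinforcing shape survives intact, which is what makes solitons attractive as a longer-lived computational medium.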
Our reservoir computer used solitary waves like those seen in drinking fountains. Ivan Maksymov, Author provided
For our reservoir computer, we used compact soliton-like waves. You often see such waves in a bathroom sink or a drinking fountain.
In our computer, a thin layer of water flows over a slightly inclined metal plate. A small electric pump changes the speed of the flow and creates solitary waves.
We added a fluorescent material to make the water glow under ultraviolet light, to precisely measure the size of the waves.
The pump plays the role of the falling stones in Alice and Bob's game, and the solitary waves correspond to the waves on the pond's surface. Solitary waves move much faster and live longer than water waves in a bucket, which lets our computer process data at a higher speed.
So, how does it perform?
We tested our computer’s ability to remember past inputs and to make forecasts for a benchmark set of chaotic and random data. Our computer not only executed all tasks exceptionally well but also outperformed a high-performance digital computer tasked with the same problem.
My colleague Andrey Pototsky and I also created a mathematical model that enabled us to better understand the physical properties of the solitary waves.
Next, we plan to miniaturize our computer as a microfluidic processor. Water waves should be able to do computations inside a chip that operates similarly to the silicon chips used in every smartphone.
In the future, our computer may be able to produce reliable long-term forecasts in areas such as climate change, bushfires and financial markets – with much lower cost and wider availability than current supercomputers.
Our computer is also naturally immune to cyber attacks because it does not use digital data.
Our vision is that a soliton-based microfluidic reservoir computer will bring data science and machine learning to rural and remote communities worldwide. But for now, our research work continues.
Over a hundred people gathered at Reformed Living Bible Church in Scottsdale, Arizona, this Sunday to celebrate the homecoming of Jacob Chansley, dubbed the "QAnon Shaman" after he was pictured walking through the U.S. Capitol in a bison-horned headdress on Jan. 6, 2021.
Chansley wasn't wearing the face paint or horns that he's famous for, but instead donned a white suit and American flag-themed tie and signed autographs on mugshots and T-shirts at a table as people lined up to greet him.
“Here I am saying hello, a couple years later, in a much more public way than I ever anticipated, and it’s really surreal, almost like a dream,” Chansley told the crowd, according to AZ Central.
Chansley was sentenced to 41 months in prison in November 2021 after he pleaded guilty to charges related to Jan. 6. He was moved to a halfway house and released last week. Prosecutors said that while he did not commit any violence on Jan. 6, he played an effective role in goading others to breach the Capitol building.
Speaking to the crowd on Sunday, Chansley said that, while he was in prison, he got closer to God than he possibly could have imagined and “felt God's presence on several occasions.”
“We must not sever our kinship based on divisive propaganda, but instead learn to find what we can agree on and rebuild our nation based on those commonly held values,” he said.
Watch the video below or at this link.
"VIDEO THREAD: Jacob Angeli Chansley was recently released early on good behavior from federal custody over January 6.

Chansley, dubbed the 'QAnon Shaman' by media, autographed mugshots and T-shirts and spoke Sunday at the Reformed Living Bible Church in Scottsdale, Arizona. …"