U.S. Republican presidential candidate Donald Trump speaks at a campaign rally in Bloomington, Illinois, March 13, 2016. REUTERS/Jim Young
A witness who's backing a woman's child rape claims against Donald Trump also accused the Republican presidential nominee of sexually assaulting another, even younger girl.
Trump is due to appear in federal court Dec. 16 for a status conference after a judge allowed the lawsuit, which seeks $75,000 in damages, to move forward.
The alleged victim, identified in the suit as "Jane Doe," claims Trump brutally raped her in 1994, when she was 13 years old, and threatened to harm her and her family if she talked.
The suit was originally filed last year in California by a woman named Katie Johnson, but that case was thrown out May 2 because the complaint failed to properly state any specific federal civil rights violations.
The lawsuit was refiled in New York in June, but without Johnson's name or her request for $100 million, and without several explosive claims from the previous suit -- including an allegation that Trump had given money to the victim and ordered her to get an abortion.
Two other women -- identified as "Joan Doe" and "Tiffany Doe" -- have been added to the newer suit as witnesses.
Both witnesses say they worked as "party planners" for billionaire pedophile Jeffrey Epstein, who paid them to "attract adolescent women" to events he hosted at the Wexner Mansion in New York.
Tiffany Doe says in court documents that she lured Jane Doe to a party with the promise of money and meeting contacts in the modeling industry.
She claims in the documents that she personally witnessed the girl being forced to engage in various sex acts with Trump and Epstein, who she said were aware of her age.
"I personally witnessed four sexual encounters that the Plaintiff was forced to have with Mr. Trump during this period, including the fourth of these encounters where Mr. Trump forcibly raped her despite her pleas to stop," Tiffany Doe alleges.
Tiffany Doe said she also witnessed Trump forcing Jane Doe and a 12-year-old girl identified as "Maria" to perform oral sex on him and then physically abuse both of them afterward.
The woman said her job duties required her to "personally witness and supervise encounters between the underage girls that Mr. Epstein hired and his guests," according to court documents.
Epstein, a financier who was also friends with Bill and Hillary Clinton, was convicted in 2008 of soliciting an underage girl for prostitution and served 13 months of an 18-month prison term.
Tiffany Doe said both Trump and Epstein threatened to harm Jane Doe if she ever revealed the physical and sexual abuse she endured -- and she said the future GOP presidential nominee's warning was particularly ominous.
"I personally witnessed Defendant Trump telling the Plaintiff that she shouldn't ever say anything if she didn't want to disappear like the 12-year-old female Maria, and that he was capable of having her whole family killed," Tiffany Doe alleged.
Tiffany Doe said after she stopped working for Epstein in 2002, he threatened to kill her and her family if she ever revealed the child rape operation he oversaw.
The woman, who started working for Epstein in 1990, said she had put herself at great risk by agreeing to back Jane Doe's claims against Trump but swore her allegations were truthful.
"I fully understand that that the life of myself and my family is now in grave danger," she said.
Trump’s infighting legal team, combined with a former president who has never met “a camera he didn’t love,” is the recipe for “an epic disaster,” an MSNBC columnist wrote Monday.
The professional standards usually associated with attorney-client relationships have been “sometimes bent to the point of breaking,” wrote Katie S. Phang, host of the Katie Phang Show.
She added: “This kind of havoc does not bode well for Trump’s legal future.”
Trump has surrounded himself with an army of lawyers as he faces a series of trials and investigations including the Stormy Daniels hush money fraud case; a second defamation case from E. Jean Carroll; a $250 million civil lawsuit accusing him, three of his children and the Trump Organization of fraud; special counsel investigations into his keeping of classified documents and allegations that he tried to overturn the 2020 presidential election; and a probe by Georgia District Attorney Fani Willis into more allegations of tampering with election results.
And the lawyers are very publicly fighting, Phang wrote.
Attorney Tim Parlatore, who testified before a grand jury in the classified documents investigation, recently quit, saying he couldn’t give counsel to Trump because of obstacles thrown up by another lawyer, Boris Epshteyn.
He also criticized another lawyer on the team, Joe Tacopina, over a potential conflict of interest because Tacopina had previously been approached about possibly representing Stormy Daniels.
Evan Corcoran, who was Trump’s lead attorney in the classified documents case, resigned after being subpoenaed to testify before a grand jury against his client. And former Trump White House counsel Pat A. Cipollone and deputy counsel Patrick Philbin have also testified to a grand jury about accusations that the former president tried to overturn the 2020 election.
“Several Trump lawyers have had to retain their own lawyers due to their representation of Trump,” wrote Phang.
“The newest iteration of ‘MAGA’ might as well now stand for ‘Making Attorneys Get Attorneys.’”
“Let’s also not forget other former Trump lawyers like Rudy Giuliani, Sidney Powell, Jenna Ellis, and John Eastman, all of whom are facing ethics complaints affecting their ability to practice law in various jurisdictions, as well as several investigations for their roles as Trump’s counsel,” wrote Phang.
She added: “The public continues, with a combination of fascination and disgust, to watch the train wreck that is Trump Legal World unfold like a political iteration of The Hunger Games. Which attorney will be left standing at the end?”
Health practitioners are increasingly concerned that because race is a social construct, and the biological mechanisms of how race affects clinical outcomes are often unknown, including race in predictive algorithms for clinical decision-making may worsen inequities.
For example, to calculate an estimate of kidney function called the estimated glomerular filtration rate, or eGFR, health care providers use an algorithm based on age, biological sex, race (Black or non-Black) and serum creatinine, a waste product the kidneys release into the blood. A higher eGFR value means better kidney health. These eGFR predictions are used to allocate kidney transplants in the U.S.
Based on this algorithm, which was trained on actual GFR values from patients, a Black patient would be assigned a higher eGFR than a non-Black patient of the same age, sex and serum creatinine level. This implies that some Black patients would be considered to have healthier kidneys than otherwise similar non-Black patients, making them less likely to be prioritized for a kidney transplant.
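For readers curious what such an algorithm looks like in practice, here is a minimal sketch of the 2009 CKD-EPI creatinine equation, one widely published eGFR formula that included a race coefficient (a 2021 revision removed it). The constants come from the published equation; the function name and the example values are my own.

```python
# A minimal sketch of the 2009 CKD-EPI creatinine equation, which
# included a race coefficient (removed in the 2021 revision).
# Constants are from the 2009 publication; names and example
# inputs here are illustrative.

def egfr_ckd_epi_2009(scr_mg_dl: float, age: float,
                      female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race coefficient: raises eGFR for Black patients
    return egfr

# Same age, sex and serum creatinine; only the race flag differs.
print(egfr_ckd_epi_2009(1.2, 50, female=False, black=False))  # ~70
print(egfr_ckd_epi_2009(1.2, 50, female=False, black=True))   # ~81 (higher)
```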
Biased clinical algorithms can lead to inaccurate diagnoses and delayed treatment.
In 2021, however, researchers found that excluding race from the original eGFR equations could lead to larger discrepancies between estimated and actual GFR values for both Black and non-Black patients. They also found that adding another biomarker, cystatin C, can improve predictions. Even with this biomarker, however, excluding race from the algorithm still led to elevated discrepancies across races.
I am a health economist and statistician who studies how unobserved factors in data can result in biases that lead to inefficiencies, inequities and disparities in health care. My recently published research suggests that excluding race from certain diagnostic algorithms could worsen health inequities.
Different approaches to fairness
Researchers use different economic frameworks to understand how society allocates resources. Two key frameworks are utilitarianism and equality of opportunity.
A purely utilitarian outlook seeks to identify what features would get the most out of a positive outcome or reduce the harm from a negative one, ignoring who possesses those features. This approach allocates resources to those with the most opportunities to generate positive outcomes or mitigate negative ones.
A utilitarian approach would always include race and ethnicity whenever they improve the prediction power and accuracy of algorithms, regardless of whether doing so is fair. For example, utilitarian policies would aim to maximize overall survival among people seeking organ transplants. They would allocate organs to those who would survive the longest after transplantation, even if patients who need the organs most, but would not survive as long because of circumstances outside their control, would die sooner without them.
Although utilitarian approaches do not take fairness into account, an approach that does would ask two questions: How do we define fairness? Are there conditions when maximizing an algorithm’s prediction power and accuracy would not conflict with fairness?
To answer these questions, I apply the equality of opportunity framework, which aims to allocate resources in a way that allows everyone the same chance of obtaining similar outcomes, without being disadvantaged by circumstances outside of their control. Researchers have used this framework in many contexts, such as political science, economics and law. The U.S. Supreme Court has also applied equality of opportunity in several landmark rulings in education.
There are two fundamental principles in equality of opportunity.
First, inequality of outcomes is unethical if it results from differences in circumstances that are outside of an individual’s own control, such as the income of a child’s parents, exposure to systemic racism or living in violent and unsafe environments. This can be remedied by compensating individuals with disadvantaged circumstances in a way that allows them the same opportunity to obtain certain health outcomes as those who are not disadvantaged by their circumstances.
Second, inequality of outcomes for people in similar circumstances that result from differences in individual effort, such as practicing health-promoting behaviors like diet and exercise, is not unethical, and policymakers can reward those achieving better outcomes through such behaviors. However, differences in individual effort that occur because of circumstances, such as living in an area with limited access to healthy food, are not addressed under equality of opportunity. Keeping all circumstances the same, any differences in effort between individuals should be due to preferences, free will and perceived benefits and costs. This is called accountable effort. So, two individuals with the same circumstances should be rewarded according to their accountable efforts, and society should accept the resulting differences in outcomes.
Equality of opportunity implies that if algorithms were to be used for clinical decision-making, then it is necessary to understand what causes variation in the predictions they make.
If variation in predictions results from differences in circumstances or biological conditions but not from individual accountable effort, then it is appropriate to use the algorithm for compensation, such as allocating kidneys so everyone has an equal opportunity to live the same length of life, but not for reward, such as allocating kidneys to those who would live the longest with the kidneys.
In contrast, if variation in predictions results from differences in individual accountable effort but not from their circumstances, then it is appropriate to use the algorithm for reward but not compensation.
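As a toy contrast (my own illustration, not a method from the study), the same prognostic score can drive either allocation rule: a "reward" rule ranks patients by predicted survival with the organ, while a "compensation" rule prioritizes those who would fare worst without it. The survival figures below are invented.

```python
# Toy contrast between "reward" and "compensation" uses of one
# prognostic score; the survival figures are invented for illustration.
import numpy as np

# Columns: predicted years of life (without transplant, with transplant)
survival = np.array([
    [4.0, 10.0],   # patient 0
    [9.0, 16.0],   # patient 1
    [2.0,  7.0],   # patient 2
])

reward_order = np.argsort(-survival[:, 1])       # longest life with the organ first
compensation_order = np.argsort(survival[:, 0])  # worst-off without the organ first

print(reward_order)        # [1 0 2]: rewards the best post-transplant prognosis
print(compensation_order)  # [2 0 1]: compensates the most disadvantaged
```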
Evaluating clinical algorithms for fairness
To hold machine learning and other artificial intelligence algorithms accountable to a standard of equity, I applied the principles of equality of opportunity to evaluate whether race should be included in clinical algorithms. I ran simulations under both ideal data conditions, where all data on a person's circumstances is available, and real data conditions, where some data on a person's circumstances is missing.
As a social construct, race is often a proxy for nonbiological circumstances.
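A toy simulation (mine, far simpler than the models in the study) makes the proxy point concrete: when race is correlated with an unobserved circumstance that drives the outcome, dropping race from a predictor fit under "real data" conditions increases prediction error.

```python
# A toy illustration (not the study's actual model): when race is
# correlated with an unobserved circumstance that drives the outcome,
# dropping race from the predictor worsens accuracy, because race was
# standing in for the missing circumstance.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
race = rng.integers(0, 2, n)                     # group whose circumstance race proxies
circumstance = 2.0 * race + rng.normal(0, 1, n)  # unobserved under "real data" conditions
biomarker = rng.normal(0, 1, n)
outcome = 3.0 * biomarker + circumstance + rng.normal(0, 1, n)

def ols_mse(X, y):
    """Mean squared error of an ordinary-least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ beta) ** 2)

X_with = np.column_stack([np.ones(n), biomarker, race])
X_without = np.column_stack([np.ones(n), biomarker])
print(ols_mse(X_with, outcome))     # lower error: race absorbs the circumstance
print(ols_mse(X_without, outcome))  # higher error once the proxy is dropped
```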
I evaluated two categories of algorithms.
The first, diagnostic algorithms, makes predictions based on outcomes that have already occurred at the time of decision-making. For example, diagnostic algorithms are used to predict the presence of gallstones in patients with abdominal pain or urinary tract infections, or to detect breast cancer using radiologic imaging.
The second, prognostic algorithms, predicts future outcomes that have not yet occurred at the time of decision-making. For example, prognostic algorithms are used to predict whether a patient will live if they do or do not obtain a kidney transplant.
I found that, under an equality of opportunity approach, diagnostic models that do not take race into account would increase systemic inequities and discrimination. I found similar results for prognostic models intended to compensate for individual circumstances. For example, excluding race from algorithms that predict the future survival of patients with kidney failure would fail to identify those with underlying circumstances that make them more vulnerable.
Including race in prognostic models intended to reward individual efforts can also increase disparities. For example, including race in algorithms that predict how much longer a person would live after a kidney transplant may fail to account for individual circumstances that could limit how much longer they live.
Unanswered questions and future work
Better biomarkers may one day be able to better predict health outcomes than race and ethnicity. Until then, including race in certain clinical algorithms could help reduce disparities.
Although my study uses an equality of opportunity framework to measure how race and ethnicity affect the results of prediction algorithms, researchers don’t know whether other ways to approach fairness would lead to different recommendations. How to choose between different approaches to fairness also remains to be seen. Moreover, there are questions about how multiracial groups should be coded in health databases and algorithms.
My colleagues and I are exploring many of these unanswered questions to reduce algorithmic discrimination. We believe our work will readily extend to other areas outside of health, including education, crime and labor markets.
Can a computer learn from the past and anticipate what will happen next, like a human? You might not be surprised to hear that some cutting-edge AI models could achieve this feat, but what about a computer that looks a little different – more like a tank of water?
We have built a small proof-of-concept computer that uses running water instead of a traditional logical circuitry processor, and forecasts future events via an approach called “reservoir computing”.
In benchmark tests, our analogue computer did well at remembering input data and forecasting future events – and in some cases it even did better than a high-performance digital computer.
So how does it work?
Throwing stones in the pond
Imagine two kids, Alice and Bob, playing at the edge of a pond. Bob throws big and small stones into the water one at a time, seemingly at random.
Big and small stones create water waves of different sizes. Alice watches the water waves created by the stones and learns to anticipate what the waves will do next – and from that, she can get an idea of which stone Bob will throw next.
Bob throws rocks into the pond, while Alice watches the waves and tries to predict what’s coming next. (Yaroslav Maksymov, Author provided)
Reservoir computers copy the reasoning process taking place in Alice’s brain: they can learn from past inputs to predict future events.
Although reservoir computers were first proposed using neural networks – computer programs loosely based on the structure of neurons in the brain – they can also be built with simple physical systems.
Reservoir computers are analogue computers. An analogue computer represents data continuously, as opposed to digital computers which represent data as abruptly changing binary “zero” and “one” states.
Representing data in a continuous way enables analogue computers to model certain natural events – ones that occur in a kind of unpredictable sequence called a “chaotic time series” – better than a digital computer.
How to make predictions
To understand how we can use a reservoir computer to make predictions, imagine you have a record of daily rainfall for the past year and a bucket full of water near you. The bucket will be our “computational reservoir”.
We input the daily rainfall record to the bucket by means of stones. For a day of light rain, we throw a small stone; for a day of heavy rain, a big stone. For a day of no rain, we throw no stone.
Each stone creates waves, which then slosh around the bucket and interact with waves created by other stones.
At the end of this process, the state of the water in the bucket gives us a prediction. If the interactions between waves create large new waves, we can say our reservoir computer predicts heavy rains. But if they are small then we should expect only light rain.
It is also possible that the waves will cancel one another, forming a still water surface. In that case we should not expect any rain.
The reservoir makes a weather forecast because the waves in the bucket and rainfall patterns evolve over time following the same laws of physics.
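One standard digital recipe for this idea is the "echo state network", essentially a software bucket: a fixed random "reservoir" transforms the input history, and only a simple linear readout is trained. The sketch below is my own minimal version; the network size, scaling factors and the logistic-map test series are arbitrary choices, not the authors' setup.

```python
# A minimal echo state network: a software analogue of the bucket.
# Parameter values here are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n_res = 200                                   # number of reservoir units ("ripples")
W_in = rng.uniform(-0.5, 0.5, n_res)          # input weights (the "stone throws")
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for fading memory

def run_reservoir(u):
    """Drive the reservoir with input series u; return all states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)       # waves interact and decay
        states.append(x.copy())
    return np.array(states)

# Train a linear readout to predict the next value of a chaotic series.
u = np.zeros(2000); u[0] = 0.4
for t in range(1999):
    u[t + 1] = 3.9 * u[t] * (1 - u[t])        # logistic map, a chaotic benchmark
X = run_reservoir(u[:-1])
# Discard a 200-step washout, then fit the readout by least squares
# (real implementations typically add ridge regularization).
W_out = np.linalg.lstsq(X[200:], u[201:], rcond=None)[0]
pred = X[-1] @ W_out                          # one-step-ahead forecast
print(pred, u[-1])
```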
The “bucket of water” reservoir computer has its limits. For one thing, the waves are short-lived. To forecast complex processes such as climate change and population growth, we need a reservoir with more durable waves.
One option is “solitons”. These are self-reinforcing waves that keep their shape and move for long distances.
Our reservoir computer used solitary waves like those seen in drinking fountains. (Ivan Maksymov, Author provided)
For our reservoir computer, we used compact soliton-like waves. You often see such waves in a bathroom sink or a drinking fountain.
In our computer, a thin layer of water flows over a slightly inclined metal plate. A small electric pump changes the speed of the flow and creates solitary waves.
We added a fluorescent material to make the water glow under ultraviolet light, to precisely measure the size of the waves.
The pump plays the role of falling stones in the game played by Alice and Bob, while the solitary waves correspond to the waves on the water surface. Solitary waves move much faster and live longer than water waves in a bucket, which lets our computer process data at a higher speed.
So, how does it perform?
We tested our computer’s ability to remember past inputs and to make forecasts for a benchmark set of chaotic and random data. Our computer not only executed all tasks exceptionally well but also outperformed a high-performance digital computer tasked with the same problem.
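For a flavor of what "remembering past inputs" means as a benchmark, the standard memory-capacity test trains one readout per delay k to reconstruct the input from k steps earlier. The sketch below reuses run_reservoir and rng from the echo-state example above; it is my illustration of that benchmark, not the authors' test protocol.

```python
# A sketch of the standard "memory capacity" benchmark for reservoirs:
# one linear readout per delay k reconstructs the input from k steps
# ago, scored by squared correlation. Reuses run_reservoir and rng
# from the echo-state sketch above.
u = rng.uniform(-1, 1, 2000)                  # random input stream
X = run_reservoir(u)
for k in (1, 5, 10):
    W_k = np.linalg.lstsq(X[200:], u[200 - k:-k], rcond=None)[0]
    r = np.corrcoef(X[200:] @ W_k, u[200 - k:-k])[0, 1]
    print(f"delay {k}: recall r^2 = {r**2:.2f}")  # decays as k grows
```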
My colleague Andrey Pototsky and I also created a mathematical model that enabled us to better understand the physical properties of the solitary waves.
Next, we plan to miniaturize our computer as a microfluidic processor. Water waves should be able to do computations inside a chip that operates similarly to the silicon chips used in every smartphone.
In the future, our computer may be able to produce reliable long-term forecasts in areas such as climate change, bushfires and financial markets – with much lower cost and wider availability than current supercomputers.
Our computer is also naturally immune to cyber attacks because it does not use digital data.
Our vision is that a soliton-based microfluidic reservoir computer will bring data science and machine learning to rural and remote communities worldwide. But for now, our research work continues.