The top prosecutor who led the investigation into the now-defunct Trump University explained why he thinks Trump Organization Chief Financial Officer Allen Weisselberg held a larger position at the company than his business card title revealed.
Former New York Assistant Attorney General Tristan Snell was interviewed by MSNBC chief legal correspondent Ari Melber on "The Beat."
"There are experts here who say money man only scratches the surface, he was at times basically the acting CEO," Melber noted. "What, if anything, can you tell us about that based on your knowledge and is that good or bad for him as he faces this heat?"
"Well, first off, you can find out a lot about an organization, about a company, about a target without actually having them cooperate with you. So we were able to get a lot of information on how the Trump Organization worked and Weisselberg's role in it despite the fact that we did not have Weisselberg's cooperation," Snell explained. "We never even felt like we needed to bring him in as a witness because we had already had enough knowledge of everything that we didn't really need Weisselberg, but we were still able to find the Trump Organization liable and a lot of why the judge in the Trump University case decided that Trump Organization was liable was because of Weisselberg's very heavy-handed day-to-day control of the organization."
"Very much he was the acting CEO. I would also say that he was basically the COO, the Chief Operating Officer of the Trump Organization, a role which has never really been filled, at least not that I know of in the past 20 years," he said, although Matthew Calameri has taken on the position since the end of the Trump University case.
"So the CFO, you know, they're the ones actually keeping the books and tracking the P&Is and seeing exactly what money is coming in, what money is going out, and making high-level decisions based on that. Weisselberg was doing more than that, he would decide which businesses would live and which businesses would die," he said "A lot of times it was Weisselberg who was the enforcer, he was not just the bean counter, he had a lot of power within the organization to determine what businesses were going to do what, which ones would go forward and which ones would be shut down."
This past fall, a police report in Foster County, North Dakota, claimed that a 42-year-old man named Shannon Brandt had run over 18-year-old Cayler Ellingson after Brandt allegedly accused the teen of being part of a "Republican extremist group."
Nonetheless, the national attention from Fox News created a firestorm in McHenry, North Dakota, a tiny town of just 64 residents.
Or as Ashley Brandt-Duda, Brandt's sister, told the Times, "everything just exploded" thanks to Fox News' sensationalizing of the story.
"Ms. Brandt-Duda said her parents left their home in McHenry out of concern for their safety. When they returned about a week later, they found more than 50 threatening messages on their answering machine," reports the Times. "The county court and sheriff’s offices also received numerous threats, according to multiple local officials."
The report concludes by noting that coverage of Ellingson's murder almost completely died on Fox News once it was determined to have not been political -- but no prime time hosts ever acknowledged that this was the case.
Trump’s in-fighting legal team mixed with a former president who has never met “a camera he didn’t love” is the recipe for “an epic disaster,” an MSNBC columnist wrote Monday.
The professional standards usually associated with attorney-client relationships have been “sometimes bent to the point of breaking,” wrote Katie S. Phang, host of “The Katie Phang Show.”
She added: “This kind of havoc does not bode well for Trump’s legal future.”
Trump has surrounded himself with an army of lawyers as he faces a series of trials and investigations, including the Stormy Daniels hush money case; a second defamation case from E. Jean Carroll; a $250 million civil lawsuit accusing him, three of his children and the Trump Organization of fraud; special counsel investigations into his retention of classified documents and allegations that he tried to overturn the 2020 presidential election; and a probe by Fulton County, Georgia, District Attorney Fani Willis into further allegations of tampering with election results.
And the lawyers are very publicly fighting, Phang wrote.
Attorney Tim Parlatore, who testified before the grand jury in the classified documents investigation, recently quit, saying he couldn’t effectively counsel Trump because of obstacles thrown up by another lawyer, Boris Epshteyn.
He also criticized another lawyer on the team, Joe Tacopina, over a potential conflict of interest, because Tacopina had previously been approached about possibly representing Stormy Daniels.
Evan Corcoran, who was Trump’s lead attorney in the classified documents case, resigned after being subpoenaed to testify before a grand jury against his client. And former Trump White House counsel Pat A. Cipollone and deputy counsel Patrick Philbin have also testified to a grand jury about accusations that the former president tried to overturn the 2020 election.
“Several Trump lawyers have had to retain their own lawyers due to their representation of Trump,” wrote Phang.
“The newest iteration of ‘MAGA’ might as well now stand for ‘Making Attorneys Get Attorneys.’”
“Let’s also not forget other former Trump lawyers like Rudy Giuliani, Sidney Powell, Jenna Ellis, and John Eastman, all of whom are facing ethics complaints affecting their ability to practice law in various jurisdictions, as well as several investigations for their roles as Trump’s counsel,” wrote Phang.
She added: “The public continues, with a combination of fascination and disgust, to watch the train wreck that is Trump Legal World unfold like a political iteration of The Hunger Games. Which attorney will be left standing at the end?”
Health practitioners are increasingly concerned that because race is a social construct, and the biological mechanisms of how race affects clinical outcomes are often unknown, including race in predictive algorithms for clinical decision-making may worsen inequities.
Based on the original eGFR (estimated glomerular filtration rate) equation, which was trained on actual GFR values from patients and included a race-based adjustment, a Black patient would be assigned a higher eGFR than a non-Black patient of the same age, sex and serum creatinine level. This implies that some Black patients would be considered to have healthier kidneys than otherwise similar non-Black patients, and would therefore be less likely to be assigned a kidney transplant.
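To make that adjustment concrete, below is a minimal Python sketch in the general form of the 2009 CKD-EPI creatinine equation; the coefficients are approximations of the published values, and the function is illustrative only, not a clinical tool. Because a higher eGFR signals healthier-looking kidneys, the race multiplier can keep a Black patient above the severity thresholds used to prioritize specialist referral or transplant listing.

```python
# Minimal sketch of the 2009 CKD-EPI creatinine equation, which included a
# race-based multiplier. Coefficients approximate the published values and are
# shown only to illustrate how the race term shifts eGFR.

def egfr_ckd_epi_2009(creatinine_mg_dl, age, female, black):
    """Estimate GFR (mL/min/1.73 m^2) from serum creatinine, age, sex and race."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = creatinine_mg_dl / kappa
    egfr = 141 * min(ratio, 1) ** alpha * max(ratio, 1) ** (-1.209) * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race multiplier: raises eGFR by roughly 16%
    return egfr

# Two patients identical in every recorded respect except race:
print(egfr_ckd_epi_2009(1.4, 55, female=False, black=False))  # lower eGFR
print(egfr_ckd_epi_2009(1.4, 55, female=False, black=True))   # ~16% higher eGFR
```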
Biased clinical algorithms can lead to inaccurate diagnoses and delayed treatment.
In 2021, however, researchers found that excluding race from the original eGFR equations could lead to larger discrepancies between estimated and actual GFR values for both Black and non-Black patients. They also found that adding a biomarker called cystatin C can improve predictions. However, even with this biomarker, excluding race from the algorithm still led to elevated discrepancies across races.
Researchers use different economic frameworks to understand how society allocates resources. Two key frameworks are utilitarianism and equality of opportunity.
A purely utilitarian outlook seeks to identify which features would yield the greatest gain from a positive outcome or the greatest reduction in harm from a negative one, ignoring who possesses those features. This approach allocates resources to those with the most opportunities to generate positive outcomes or mitigate negative ones.
A utilitarian approach would always include race and ethnicity if doing so improves the predictive power and accuracy of algorithms, regardless of whether it’s fair. For example, utilitarian policies would aim to maximize overall survival among people seeking organ transplants. They would allocate organs to those predicted to survive the longest after transplantation, even if patients whose shorter predicted survival stems from circumstances outside their control, and who need the organs most urgently, would die sooner without a transplant.
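As a rough illustration (a sketch of my own, not an algorithm from the study), a purely utilitarian allocation rule reduces to ranking candidates by predicted benefit and giving the scarce resource to the top of the list:

```python
# Hypothetical sketch: utilitarian allocation ranks candidates purely by
# predicted benefit, ignoring whether a low prediction stems from
# circumstances outside the patient's control.

def utilitarian_allocation(candidates, n_organs):
    """candidates: list of dicts with a 'predicted_survival_years' field."""
    ranked = sorted(candidates,
                    key=lambda c: c["predicted_survival_years"], reverse=True)
    return ranked[:n_organs]

patients = [
    {"id": "A", "predicted_survival_years": 22.0},
    {"id": "B", "predicted_survival_years": 14.5},  # lower prediction driven in
    {"id": "C", "predicted_survival_years": 18.3},  # part by unmeasured circumstances
]
print([p["id"] for p in utilitarian_allocation(patients, n_organs=2)])  # ['A', 'C']
```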
Although utilitarian approaches do not take fairness into account, an approach that does would ask two questions: How do we define fairness? And are there conditions under which maximizing an algorithm’s prediction power and accuracy would not conflict with fairness?
To answer these questions, I apply the equality of opportunity framework, which aims to allocate resources in a way that allows everyone the same chance of obtaining similar outcomes, without being disadvantaged by circumstances outside of their control. Researchers have used this framework in many contexts, such as political science, economics and law. The U.S. Supreme Court has also applied equality of opportunity in several landmark rulings in education.
There are two fundamental principles in equality of opportunity.
First, inequality of outcomes is unethical if it results from differences in circumstances that are outside of an individual’s own control, such as the income of a child’s parents, exposure to systemic racism or living in violent and unsafe environments. This can be remedied by compensating individuals with disadvantaged circumstances in a way that allows them the same opportunity to obtain certain health outcomes as those who are not disadvantaged by their circumstances.
Second, inequality of outcomes among people in similar circumstances that results from differences in individual effort, such as practicing health-promoting behaviors like diet and exercise, is not unethical, and policymakers can reward those achieving better outcomes through such behaviors. However, differences in individual effort that occur because of circumstances, such as living in an area with limited access to healthy food, are not addressed under equality of opportunity. Holding all circumstances the same, any differences in effort between individuals should be due to preferences, free will and perceived benefits and costs. This is called accountable effort. So, two individuals with the same circumstances should be rewarded according to their accountable efforts, and society should accept the resulting differences in outcomes.
Equality of opportunity implies that if algorithms were to be used for clinical decision-making, then it is necessary to understand what causes variation in the predictions they make.
If variation in predictions results from differences in circumstances or biological conditions but not from individual accountable effort, then it is appropriate to use the algorithm for compensation, such as allocating kidneys so everyone has an equal opportunity to live the same length of life, but not for reward, such as allocating kidneys to those who would live the longest with the kidneys.
In contrast, if variation in predictions results from differences in individual accountable effort but not from their circumstances, then it is appropriate to use the algorithm for reward but not compensation.
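Continuing the same hypothetical sketch, a compensation-oriented rule under equality of opportunity flips the criterion: rather than rewarding the highest predicted post-transplant survival, it prioritizes candidates whose predicted survival without a transplant is shortest, on the view that their disadvantage reflects circumstances rather than accountable effort.

```python
# Hypothetical sketch: compensation-oriented allocation prioritizes those who
# would fare worst without the resource, so everyone has a more equal chance
# at a similar lifespan.

def compensation_allocation(candidates, n_organs):
    """candidates: list of dicts with a 'survival_without_transplant_years' field."""
    ranked = sorted(candidates,
                    key=lambda c: c["survival_without_transplant_years"])
    return ranked[:n_organs]

patients = [
    {"id": "A", "survival_without_transplant_years": 6.0},
    {"id": "B", "survival_without_transplant_years": 1.5},  # most urgent need
    {"id": "C", "survival_without_transplant_years": 3.2},
]
print([p["id"] for p in compensation_allocation(patients, n_organs=2)])  # ['B', 'C']
```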
Evaluating clinical algorithms for fairness
To hold machine learning and other artificial intelligence algorithms accountable to a standard of equity, I applied the principles of equality of opportunity to evaluate whether race should be included in clinical algorithms. I ran simulations under both ideal data conditions, where all data on a person’s circumstances is available, and real data conditions, where some data on a person’s circumstances is missing.
As a social construct, race is often a proxy for nonbiological circumstances.
I evaluated two categories of algorithms.
The first, diagnostic algorithms, makes predictions based on outcomes that have already occurred at the time of decision-making. For example, diagnostic algorithms are used to predict the presence of gallstones in patients with abdominal pain or urinary tract infections, or to detect breast cancer using radiologic imaging.
The second, prognostic algorithms, predicts future outcomes that have not yet occurred at the time of decision-making. For example, prognostic algorithms are used to predict whether a patient will live if they do or do not obtain a kidney transplant.
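To make the distinction concrete, here is a schematic sketch, with hypothetical function and feature names of my own rather than the models evaluated in the study:

```python
# Schematic contrast between the two categories; names are hypothetical.

def diagnostic_prediction(imaging_features: dict) -> float:
    """Diagnostic: estimate the probability that a condition is already present
    at decision time, e.g. breast cancer on a radiologic image."""
    ...  # a fitted classifier would score today's observations here

def prognostic_prediction(patient_features: dict, receives_transplant: bool) -> float:
    """Prognostic: estimate a future outcome that has not yet occurred,
    e.g. expected survival with or without a kidney transplant."""
    ...  # a fitted survival model would project forward under each scenario
```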
I found that, under an equality of opportunity approach, diagnostic models that do not take race into account would increase systemic inequities and discrimination. I found similar results for prognostic models intended to compensate for individual circumstances. For example, excluding race from algorithms that predict the future survival of patients with kidney failure would fail to identify those with underlying circumstances that make them more vulnerable.
Including race in prognostic models intended to reward individual efforts can also increase disparities. For example, including race in algorithms that predict how much longer a person would live after a kidney transplant may fail to account for individual circumstances that could limit how much longer they live.
Unanswered questions and future work
Better biomarkers may one day predict health outcomes more accurately than race and ethnicity do. Until then, including race in certain clinical algorithms could help reduce disparities.
Although my study uses an equality of opportunity framework to measure how race and ethnicity affect the results of prediction algorithms, researchers don’t know whether other ways to approach fairness would lead to different recommendations. How to choose between different approaches to fairness also remains to be seen. Moreover, there are questions about how multiracial groups should be coded in health databases and algorithms.
My colleagues and I are exploring many of these unanswered questions to reduce algorithmic discrimination. We believe our work will readily extend to other areas outside of health, including education, crime and labor markets.