Trump’s Vision for AI Fails to Make America Healthy Again
By Cat Wang & Haleh Cohn
President Donald Trump’s reversal of his predecessor’s plan to regulate artificial intelligence (AI) safety may usher in a new era of MAGA-biased AI that fails to make America healthy again. The world must take note.
Biased algorithms have the potential to perpetuate and amplify systemic discrimination. This is especially troubling at a time when Trump is attacking all diversity, equity, and inclusion (DEI) efforts –– sidelining the very frameworks needed to address these biases.
The president’s hostility to AI regulation and his opposition to DEI policies go hand in hand. In October 2023, then-president Joe Biden signed an executive order to oversee AI innovation, requiring developers to share safety test results with the government prior to public release. A key point was to help “prevent the bias in AI tools that can be used to make decisions,” he said at a news conference. But in his first week in office, Trump rescinded the order, replacing it with a new one stating that AI systems must be “free from ideological bias.” That same week, he signed another executive order terminating “radical” government DEI programs.
Make no mistake: Bias –– whether personal or ideological, human or artificial –– is not so easily removed. AI systems use algorithms to process large amounts of data and identify patterns, which are then used to make predictions and inform decisions. But if these models are fed biased data, they will spit out biased results. AI has the potential to replicate and amplify human biases in three major ways: 1) data underrepresentation in training, 2) flawed data labelling, and 3) biased human assumptions underlying algorithms. The populations who risk being underserved by AI tools are folks who are already underserved. That doesn’t mean AI is always flawed –– it just needs to be fact-checked.
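The first of those failure modes, underrepresentation in training data, can be seen in a toy sketch. The numbers and groups below are entirely hypothetical, and the threshold “model” is a deliberate simplification, but the mechanism is the one described above: a model fit mostly to one group learns that group’s pattern and misreads the underrepresented one.

```python
# Hypothetical training data: (symptom score, true label).
# Group A (the majority of the data) presents as sick above a score of 5;
# Group B (one lone example) presents as sick at a lower score of 4.
train = [(6, "sick"), (7, "sick"), (8, "sick"),            # Group A, sick
         (5, "healthy"), (4, "healthy"), (3, "healthy"),   # Group A, healthy
         (4, "sick")]                                      # Group B, sick

def errors(cutoff):
    """Training errors for a simple threshold model: score > cutoff means sick."""
    return sum((score > cutoff) != (label == "sick") for score, label in train)

# "Training" picks the cutoff with the fewest errors -- dominated by Group A.
best_cutoff = min(range(10), key=errors)   # lands at 5, Group A's pattern

# A Group B patient with a score of 4 is truly sick by that group's pattern,
# but the learned cutoff declares them healthy: bias out, because bias in.
group_b_patient = 4
prediction = "sick" if group_b_patient > best_cutoff else "healthy"
```

The model is not malicious; it simply never saw enough of Group B to learn its pattern, which is exactly why auditing results per group matters.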
We humans need to pay attention: study after study has documented bias baked into healthcare algorithms.
A 2019 paper in Science exposed significant racial bias in an algorithm used by large health systems. The tool, used to identify high-risk patients who should receive additional care-management resources, assigned less-sick white patients the same risk score as their much sicker Black counterparts. The algorithm used healthcare costs as a proxy for health status, an assumption that failed to capture that less money is spent on Black patients with the same level of need, further disadvantaging Black populations’ access to care management.
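The cost-as-proxy flaw can be distilled into a few lines. The patients and dollar figures below are made up for illustration, not drawn from the study, but the logic mirrors its finding: rank patients by predicted cost and historical underspending becomes a lower risk score; rank them by health need and the gap disappears.

```python
# Hypothetical patients: equally sick, but historically less is spent
# on the Black patient's care.
patients = [
    {"group": "white", "chronic_conditions": 5, "annual_cost": 12000},
    {"group": "Black", "chronic_conditions": 5, "annual_cost": 7000},
]

def risk_by_cost(p):
    """Cost-based score, as in the flawed algorithm: high spending = high risk."""
    return p["annual_cost"]

def risk_by_need(p):
    """Need-based score: rank by health status itself."""
    return p["chronic_conditions"]

by_cost = sorted(patients, key=risk_by_cost, reverse=True)
# By cost, the white patient outranks an equally sick Black patient;
# by need, the two patients tie. The proxy, not the patients, created the gap.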
A 2021 publication in Nature reported underdiagnosis bias in AI algorithms used on chest radiographs. When tasked with identifying disease from patients’ scans, the model consistently overlooked underserved subpopulations, falsely concluding they were healthy when they required treatment. Underdiagnosis rates were even greater for underserved populations when factoring in intersectionality, demonstrating how quickly biases compound when AI fails to consider critical social factors.
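Underdiagnosis of the kind that study measured is an audit anyone can run: compare the false-negative rate, the share of truly sick patients the model calls healthy, across subgroups. The eight records below are invented for illustration; the per-group metric is the point.

```python
# Hypothetical model outputs: (subgroup, truly_sick, model_says_sick).
records = [
    ("served", True, True), ("served", True, True),
    ("served", True, False), ("served", False, False),
    ("underserved", True, False), ("underserved", True, False),
    ("underserved", True, True), ("underserved", False, False),
]

def false_negative_rate(group):
    """Share of truly sick patients in a subgroup that the model missed."""
    sick = [r for r in records if r[0] == group and r[1]]
    missed = [r for r in sick if not r[2]]
    return len(missed) / len(sick)

# Here the model misses 1 of 3 sick "served" patients but 2 of 3 sick
# "underserved" patients -- an underdiagnosis gap invisible in overall accuracy.
```

Overall accuracy can look respectable while one subgroup quietly bears most of the missed diagnoses, which is why disaggregated auditing, not aggregate benchmarks, is what regulation would need to require.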
The research points strongly and repeatedly to the need for more regulation, not less. The evidence shows AI has perpetuated bias in the past, and we can safely predict it will do so again in the future; flawed algorithms drive poor outcomes. Disadvantaged groups will bear the brunt first, but the “black box” of AI means any variable is at risk of bias. You’ll care when your own health is on the line. We may not even recognize the unjust systems at play: bias is elusive, sneaking in unnoticed. We can’t address problems we don’t know exist; regulatory measures can help us remove our AI blinkers.
Trump claims he wants to be non-ideological about who wins or loses from AI, but by limiting transparency and discounting equity, his administration is undermining its own stated goals. While it's easy to get caught up in the flurry of Trump’s political upheaval, AI is here to stay, and it calls for bipartisan, long-term planning.
By signing both executive orders in practically the same stroke, Trump is sending a message: AI hype is in, “woke” DEI is out. While AI has the potential to revolutionize medicine, we have to be able to check it when it goes wrong. Evidence-based science is thoroughly peer-reviewed prior to public release. AI science should not be exempt.
Cat Wang holds a Master of Science in Public Health from McGill University. She is a science communicator at the McGill Office for Science and Society.
Haleh Cohn is a Bachelor of Science candidate at McGill University and a science communicator at the McGill Office for Science and Society.