Trump’s Executive Order to Target Ideological Bias in Government AI
The order addresses growing concerns over biased AI outputs, which can have far-reaching consequences in government applications.

On Wednesday, President Donald Trump signed an executive order to ensure artificial intelligence (AI) systems used by the federal government remain free from ideological bias, specifically targeting what the administration labels “woke” influences, such as diversity, equity, and inclusion (DEI) principles, critical race theory, and related concepts. This directive, one of three AI-focused orders aimed at countering China’s technological advancements, mandates that tech companies prove their AI models—like Google’s Gemini, Microsoft’s Copilot, and OpenAI’s ChatGPT—are ideologically neutral to secure lucrative federal contracts. The policy reflects a broader push to prioritize American values in technology, emphasizing “truth-seeking” AI, a principle championed by Elon Musk’s xAI and its Grok chatbot.
AI systems, trained on vast internet datasets, often reflect the biases of their source material, including societal prejudices embedded in online content. For instance, Google’s Gemini AI faced backlash in February 2024 for generating historically inaccurate images, such as depicting America’s Founding Fathers as Black, Asian, or Native American, the result of overcorrecting for racial bias. Such errors can erode public trust and distort critical decision-making in areas like law enforcement, hiring, and resource allocation, where federal agencies increasingly rely on AI.
Biased AI poses significant risks. Inaccurate or skewed outputs can perpetuate unfair treatment, misinform policy decisions, or undermine national security. For example, a 2023 study by the National Institute of Standards and Technology found that facial recognition AI used by federal agencies misidentified individuals of certain ethnicities at higher rates, potentially leading to wrongful profiling. Similarly, AI-driven hiring tools have been shown to favor certain demographics if trained on biased datasets, risking discriminatory outcomes in government employment. The Trump administration argues that intentional efforts to encode DEI or other ideological frameworks into AI exacerbate these issues, creating systems that prioritize agendas over accuracy.
The executive order requires tech companies to disclose internal policies guiding their AI’s behavior, ensuring no deliberate partisan or ideological judgments are embedded. This transparency aims to prevent what the administration calls “destructive” top-down efforts to shape AI outputs, such as hard-coding diversity quotas or suppressing certain viewpoints. Influenced by Silicon Valley figures like David Sacks and conservative strategist Chris Rufo, the policy avoids China’s heavy-handed AI censorship—where models are audited to filter out banned content like references to Tiananmen Square—but uses federal contracts as leverage to encourage self-regulation. Neil Chilson, former Federal Trade Commission chief technologist, described the approach as “light touch,” noting it only mandates disclosure of bias mitigation efforts, not specific output restrictions.
Tech giants have responded cautiously. OpenAI stated that ChatGPT already aligns with the directive’s objectivity goals, pointing to its existing efforts to minimize bias. Microsoft, Anthropic, Google, Meta, and Palantir have not commented publicly, reflecting the industry’s delicate navigation of the policy. xAI, recently awarded a $200 million Defense Department contract, praised Trump’s AI initiatives but sidestepped the anti-bias order, despite scrutiny over Grok’s recent antisemitic outputs, which the company attributed to outdated code and says it quickly addressed.
The policy reverses Biden-era AI initiatives, which prioritized addressing racial and gender biases but faced criticism for overcorrecting, as seen in Google’s 2024 image generation errors. Critics of the new order, like former Biden official Jim Secreto, argue that achieving ideological neutrality is challenging due to the inherent biases in AI training data, which reflect the contradictions of human language. Nonetheless, the administration contends that ensuring neutrality in government AI enhances fairness, strengthens public trust, and bolsters national security, aligning with efforts to maintain U.S. technological dominance over rivals like China.
A study period will precede formal procurement rules, giving companies time to adapt. Failure to comply could jeopardize access to billions in federal contracts, pressuring firms to align their AI systems with the administration’s vision. The outcome will shape how AI is developed and deployed across government agencies, with implications for fairness, accuracy, and America’s global tech leadership.