
Trump’s Anti-Regulation Pitch Is Exactly What the AI Industry Wants to Hear

As the prospect of Artificial General Intelligence (AGI)—AI capable of surpassing human performance across most tasks—looms closer, Donald Trump’s presidency marks the beginning of a transformative era. Yet, his early comments on AI reflect a mix of enthusiasm and confusion, leaving his strategic direction unclear. 

In a podcast interview with YouTube influencer Logan Paul, Trump referred to superintelligence as “super-duper AI,” revealing a limited grasp of the technology. While he voiced alarm over the dangers of deepfakes, calling them “scary” and “alarming,” he was equally captivated by large language models capable of drafting impressive speech scripts. Praising their speed and output, Trump joked that AI might one day replace his speechwriter. 

Trump and Logan Paul on the Impaulsive podcast. Source: YouTube 

These remarks illustrate Trump’s dual perspective: a fascination with AI’s transformative potential paired with a lack of nuanced understanding of its risks. 

Silicon Valley and the Battle Over AI Regulation 

The tech industry is deeply divided over the future of AI development, with two dominant camps shaping the conversation. On one side are “accelerationists” (or “e/accs”), who oppose regulation and advocate for unbridled technological advancement. On the other side are proponents of “AI alignment,” who focus on ensuring AI systems adhere to ethical standards and human values to mitigate risks. 

Accelerationists often dismiss safety advocates as “decelerationists” or “doomers,” while alignment proponents warn of catastrophic outcomes if AI development proceeds recklessly. Within this polarized landscape, Trump’s administration is expected to favor accelerationist ideals, minimizing regulation to promote rapid innovation. 

Prominent accelerationist figures have celebrated Trump’s election as a win for their cause. @bayeslord, a leader in the movement, declared on X: “We may actually be on the threshold of the greatest period of technological acceleration in history, with nothing in sight that can hold us back, and clear open roads ahead.” 

However, this accelerationist optimism clashes with concerns from AI safety advocates, who argue that unchecked development could amplify societal risks, from biased algorithms to existential threats. 

Policy Implications: AI Regulation and the CHIPS Act 

Trump’s approach to AI regulation is expected to be shaped by his broader anti-regulation stance. He has already signaled plans to rescind President Joe Biden’s 2023 executive order on AI, which aimed to address risks such as discrimination in hiring and decision-making processes. Republicans have criticized these measures as excessively “woke,” with Dean Ball of the Mercatus Center noting that they “gave people the ick.” 

Additionally, Trump’s administration may target the US AI Safety Institute, an initiative launched to ensure the safe development of AI technologies. The institute, whose head of AI safety is alignment researcher Paul Christiano, represents a focal point of the Biden-era regulatory framework that Trump is likely to dismantle or reshape. 

On the semiconductor front, Trump has criticized the CHIPS and Science Act, which was designed to bolster US semiconductor manufacturing—a critical component of advanced AI systems. However, there is bipartisan hope that his opposition is mostly rhetorical. Maintaining leadership over China in AI development is likely to influence Trump’s eventual support for policies that strengthen the semiconductor supply chain. During his podcast with Logan Paul, Trump underscored the importance of AI leadership, stating, “We have to be at the forefront. We have to take the lead over China.” 

AI Safety and Republican Perspectives 

Despite expectations of a deregulation-focused agenda, AI safety advocates believe Trump’s administration could be more open to their concerns than accelerationists assume. Sneha Revanur, founder of Encode Justice, points out that partisan lines on AI policy are not clearly defined, leaving room for nuanced discussions about risk mitigation. 

Surprisingly, elements within Trump’s orbit have already engaged with safety-focused perspectives. In September, Ivanka Trump posted about “Situational Awareness,” a manifesto by former OpenAI researcher Leopold Aschenbrenner that warns of AGI triggering a global conflict with China. The post sparked widespread discussion, with some speculating about Ivanka’s potential influence on Trump’s policy decisions. 

Other Republicans have raised concerns about AI’s societal impact. Senator Josh Hawley has criticized lax safety measures at AI companies, while Senator Ted Cruz proposed legislation to ban AI-generated revenge porn. Vice President-elect JD Vance has pointed to left-wing bias in AI systems as a significant issue. 

These concerns suggest that the GOP’s stance on AI may extend beyond accelerationism to include targeted measures addressing specific risks. 

The Role of Elon Musk: Ally or Critic? 

One of the most influential voices in the AI debate is Elon Musk, whose views on regulation add complexity to the discussion. While accelerationists hail Musk as a hero, his support for stronger oversight complicates this narrative. Musk has called for a regulatory body to monitor AI companies and supported California’s SB 1047, a rigorous AI regulation bill opposed by major tech firms. 

Musk’s advocacy for regulation stems in part from his public fallout with OpenAI, which he co-founded but left in 2018. Since then, he has criticized the organization, filed lawsuits against it, and founded his own rival company, xAI. This rivalry, combined with Musk’s evolving views on AI safety, makes him a wildcard in shaping Trump’s AI policies. 

Navigating Contradictions: Innovation vs. Safety 

Republicans face a significant challenge in reconciling their stance on AI development. While the party is critical of Silicon Valley and wary of empowering tech giants, it also recognizes the need to stay ahead of global competitors like China. This tension is likely to influence their policy decisions in the coming years. 

According to Casey Mock, chief policy officer at the Center for Humane Technology, Republicans are more likely to focus on immediate, tangible issues. Concerns such as deepfake pornography and students using AI to cheat on homework are expected to dominate the agenda, while long-term risks like AGI misalignment may take a backseat. 

This pragmatic approach aligns with the party’s broader emphasis on addressing “kitchen table” issues that resonate with everyday Americans. 

Shaping the Future of AI 

As the first president of the AGI era, Trump’s policies will have far-reaching implications for the future of AI development. His administration’s accelerationist leanings suggest a push for minimal regulation, but internal party concerns and pressure from safety advocates could lead to a more balanced approach. 

The AGI era represents a transformative moment in human history. How Trump navigates this period will not only define his presidency but also shape the trajectory of AI’s integration into society. With immense opportunities and significant risks at stake, the world will be watching closely as this story unfolds. 

 
