Congress May Finally Take on AI in 2025 – Casson Living – World News, Breaking News, International News

In 2024, artificial intelligence tools became intricately woven into the fabric of everyday life. However, the United States struggled to keep pace with the need for AI regulation: many proposed bills in Congress, whether aimed at funding research or at mitigating risks, fell prey to partisan strife and competing legislative agendas. A case in point is a California bill designed to hold AI companies accountable for damages; despite passing the state legislature with ease, it was ultimately vetoed by Governor Gavin Newsom.

This legislative stagnation has sparked concern among those wary of AI. In an interview with TIME, Ben Winters, director of AI and data privacy at the Consumer Federation of America, warned, “We’re witnessing a repeat of past failures with privacy and social media—failing to implement protective measures at the outset is critical for safeguarding individuals while still promoting genuine innovation.”

On the flip side, many tech industry advocates have successfully persuaded lawmakers that imposing strict regulations could stifle economic growth. This dynamic has led the U.S. to potentially focus on finding consensus in specific, isolated areas of concern rather than establishing a comprehensive regulatory framework like the E.U.’s AI Act introduced in 2023.

As we look ahead to the new year, several pivotal AI issues are expected to dominate the congressional agenda for 2025.

Tackling Specific AI Threats

One of the urgent matters Congress may prioritize is the emergence of non-consensual deepfake pornography. In 2024, cutting-edge AI tools made it alarmingly easy for individuals to create and share degrading, sexualized images of vulnerable people, especially young women. These images can proliferate rapidly online and have even been weaponized for extortion.

Political leaders, parent advocacy groups, and civil society entities are increasingly acknowledging the need for action against these exploitative practices. However, legislative efforts have often stalled at various stages. Recently, the Take It Down Act, co-sponsored by Texas Republican Ted Cruz and Minnesota Democrat Amy Klobuchar, gained traction after significant media coverage and lobbying efforts, becoming part of a House funding bill. This proposed legislation aims to criminalize the creation of deepfake pornography and requires social media platforms to remove such content within 48 hours of a takedown request.

Despite these developments, the funding bill ultimately collapsed due to substantial opposition from some Trump allies, including Elon Musk. Nevertheless, the inclusion of the Take It Down Act in the bill suggests it gained support from key leaders in both the House and Senate, as noted by Sunny Gandhi, vice president of political affairs at Encode, an AI advocacy organization. Gandhi also pointed out that the Defiance Act, which would empower victims to pursue civil action against creators of deepfake content, may be another legislative focus in the year ahead.

Read More: Time 100 AI: Francesca Mani

Advocates are also expected to push for legislative measures addressing other AI-related issues, such as consumer data protection and the risks associated with companion chatbots that might promote self-harm. In one heartbreaking incident earlier this year, a 14-year-old took his own life after engaging with a chatbot that encouraged him to “come home.” The difficulty of passing even a seemingly straightforward bill targeting deepfake pornography signals a challenging road ahead for broader legislative initiatives.

Increasing Support for AI Research

In parallel, many lawmakers are working to bolster support for advancing AI technology. Industry proponents frame AI development as a critical race, warning that the U.S. risks falling behind other nations if it fails to invest sufficiently in this area. On December 17, the Bipartisan House AI Task Force released a comprehensive 253-page report stressing the importance of nurturing “responsible innovation.” Task force co-chairs Jay Obernolte and Ted Lieu remarked, “AI has the potential to vastly improve productivity, allowing us to achieve our objectives more efficiently and economically, from optimizing manufacturing to developing treatments for serious ailments.”

In this context, Congress is likely to push for increased funding for AI research and infrastructure. One notable bill that attracted interest but ultimately failed was the Create AI Act, which aimed to establish a national AI research resource accessible to academics, researchers, and startups. “The goal is to democratize participation in this innovation,” said Senator Martin Heinrich, a Democrat from New Mexico and the bill’s primary sponsor, in a July interview with TIME. “We cannot allow this development to be concentrated in just a few regions of the country.”

More controversially, Congress may also explore funding for the integration of AI technologies into military and defense systems. Allies of Trump, including David Sacks, a venture capitalist from Silicon Valley appointed by Trump as his “White House A.I. & Crypto Czar,” have shown interest in applying AI for military purposes. Defense contractors have indicated to Reuters that Elon Musk’s Department of Government Efficiency is likely to pursue collaborative projects between contractors and AI technology firms. In December, OpenAI announced a partnership with defense technology company Anduril to utilize AI in countering drone threats.

This past summer, Congress allocated $983 million to the Defense Innovation Unit, which is tasked with incorporating new technologies into the Pentagon’s operations—a significant increase over previous years. The next Congress might authorize even larger funding packages for similar initiatives. “Historically, the Pentagon has been a tough environment for newcomers, but we are now witnessing smaller defense companies successfully competing for contracts,” explains Tony Samp, head of AI policy at DLA Piper. “There’s now a push from Congress for disruption and a quicker pace of change.”

Senator Thune in the Spotlight

Republican Senator John Thune from South Dakota is set to be a key figure in shaping AI legislation in 2025, particularly as he prepares to become the Senate Majority Leader in January. In 2023, Thune collaborated with Klobuchar to introduce a bill aimed at increasing transparency in AI systems. While he has criticized Europe’s “heavy-handed” regulations, Thune has also voiced support for a tiered regulatory approach that specifically addresses high-risk AI applications.

“I’m hopeful about the potential for positive outcomes, especially since the Senate Majority Leader will be one of the leading Republicans engaged in tech policy discussions,” Winters notes. “This could open doors for more legislative initiatives concerning issues like children’s privacy and data protection.”

Trump’s Role in Shaping AI Policy

As Congress delves into AI legislation in the coming year, it will undoubtedly take cues from President Trump. His position on AI remains somewhat unclear, and he will likely be swayed by a diverse array of Silicon Valley advisors, each with their own views on the technology. Marc Andreessen, for instance, champions rapid AI development, while Musk has voiced concerns about the existential risks it poses.

Some expect a primarily deregulation-oriented approach from Trump, but Alexandra Givens, CEO of the Center for Democracy & Technology, notes that Trump was the first president to issue an executive order on AI in 2020, which emphasized the technology’s potential impacts on individual rights, privacy, and civil liberties. “We hope he continues to shape the conversation this way and that AI doesn’t become a divisive issue along party lines,” she adds.

Read More: What Donald Trump’s Win Means For AI

State Initiatives May Surpass Congressional Efforts

Given the typical challenges of passing legislation at the federal level, it’s likely that state legislatures will step up to create their own AI regulations. More progressive states may tackle AI risks that a Republican-controlled Congress might avoid, such as racial and gender biases in AI systems or their environmental implications. For instance, Colorado recently passed a law regulating AI use in high-stakes scenarios like job, loan, and housing application screenings. “This approach addressed high-risk applications while remaining relatively unobtrusive,” explains Givens. In Texas, a lawmaker has introduced a similar bill that is set to be reviewed in the upcoming legislative session, while New York is considering legislation that would restrict the construction of new data centers and require reporting on their energy consumption.