". . . and having done all . . . stand firm." Eph. 6:13

Newsletter

The News You Need

Subscribe to The Washington Stand

X
Article banner image
Print Icon
News

Federal Judges Admit to Using AI to Formulate Retracted Court Rulings

October 24, 2025

Two rulings from two federal judges were retracted over the summer. For months, no one knew why; the courts offered nothing but silence on the matter. Now, however, the judges have confessed: the opinions were created using artificial intelligence (AI).

Both rulings were initially issued in the summer. New Jersey U.S. District Judge Julien Xavier Neals and Mississippi U.S. District Judge Henry T. Wingate admitted to Senate Judiciary Committee Chairman Chuck Grassley (R-Iowa) that the use of AI led to errors in their rulings. As The Washington Times reported, “One judge said a law school intern wrongly used the AI, and the other said it was a clerk in his office. In both cases, the judges signed the final rulings.” Both judges said they are working to ensure this does not happen again. But as far as Grassley is concerned, it never should have occurred in the first place.

“We can’t allow laziness, apathy or overreliance on artificial assistance to upend the judiciary’s commitment to integrity and factual accuracy,” Grassley asserted. “As always, my oversight will continue.” Other experts have chimed in, such as University of Louisville law professor Susan Tanner. “The legal system depends on careful deliberation and verification,” she said. Conversely, “Generative AI … is built for speed and fluency. That mismatch between AI’s quick confidence and the slow, careful work of legal reasoning means we need to be really thoughtful about how we integrate this technology, not just reactive.”

The New Jersey judge said his office does not allow AI use, and he even noted that the law school the intern attends also forbids its use. Meanwhile, the Mississippi judge said his office lacks any guidelines on the matter. In any case, “the Administrative Office of the U.S. Courts, which oversees federal judicial operations, is reviewing how AI is used in courts,” WT noted.

The faulty rulings, one from June and the other from July, raised eyebrows when skeptics noticed signs of “AI hallucinations” — fictional case names, inaccurate summaries of existing rulings, and false quotes from legitimate decisions. According to the Times, “For several years, the legal world has been rife with stories of AI hallucinations making their way into attorneys’ briefs, and some judges have been stern in imposing sanctions on lawyers who have filed AI-polluted briefs.”

Yet, even with the judges’ confessions and promises to course-correct moving forward, these instances draw attention to a larger conversation about the role of AI in society — particularly as it bleeds into the workforce and academic institutions, undermining court rulings and threatening professional and academic integrity. In fact, during a panel discussion at Family Research Council’s 2025 Pray Vote Stand Summit, FRC’s Vice President for Policy and Government Affairs Travis Weber posed a fitting question: “Should we be concerned about AI?”

If you ask Brandon Maddick, software engineer and head of product for the Christian AI platform Dominion, the answer is “definitely.” As he explained, it’s not merely the risk of job loss and its impact on the economy we have to address, but also the way AI has crept into people’s personal lives — whether in how students do their work or in the chatbots that have young people viewing AI as their boyfriends or girlfriends.

“My initial response is put the brakes on hard,” Weber stated. “Let’s even back out of where we are.” Yet, AI is likely not going anywhere. So, Weber asked, “How do we practically think about it?” Moreover, “What do we need to know about that in terms of the impact of this technology on the world around us?” In response, Jon Frendl, a tech entrepreneur and founder of the custom app development firm Cappital, offered a detailed explanation of how AI development is only just beginning.

“[T]he way to really do a lot of work in AI is you build several AIs that help to build even better AIs, and those better AIs help you build even better AIs. So, there’s an exponential nature to that. And when you combine that with the amount of investment across the board internationally, and then really you can look at power companies and chips, which are the fundamental things necessary behind this.” Understanding all this, he stressed, shows us that AI is only “just getting started, and it’s going to radically change things.” However, as both Frendl and Maddick concluded, these ever-advancing AI developments, especially if left unchecked, could ultimately drive people back to a place of desiring true meaning — church, real human connections, and genuine effort.

As “with everything in the world,” Maddick stated, “Christians have the ability to reject it, receive it as it is, or redeem it. … [R]ejecting AI is a dangerous option for those souls that are not yet saved, because of the way that our community will look very different. Receiving it is not an option because of the embedded problems that we’re already starting to see … that are only going to worsen as it becomes more pervasive. And so, the option left to us is to redeem it, to take dominion, and put AI under Christ’s lordship for the benefit of all.”

Sarah Holliday is a reporter at The Washington Stand.


