Two Supreme Courts Finalized AI Rules for Judges. The US Hasn’t.
Americans are watching a movie about an AI judge. Mercy, the Chris Pratt and Rebecca Ferguson thriller, became the number-one movie on Prime Video globally in March. The premise is that a detective has 90 minutes to prove his innocence to an AI judge before it sentences him to death. Millions of people are encountering the idea of AI in the courtroom as a dystopian spectacle.
The actual version is more mundane and already in production. Last fall, two US federal judges had to publicly explain to Senator Chuck Grassley how AI-generated errors appeared in their court orders. Judge Henry T. Wingate of the Southern District of Mississippi issued an order that named plaintiffs and defendants who were not parties to the case. Judge Julien Xavier Neals of the District of New Jersey issued an opinion with fabricated citations and misattributed quotes. Both said staff in their chambers had used generative AI tools to draft the documents, and the documents went out before anyone verified the output. Perplexity in one case, ChatGPT in the other. A 2026 Northwestern University survey of federal judges found that 61.6% of respondents had used at least one AI tool in their judicial work, on a response rate of 22.3%. Even with that limited response, the number is large enough that the absence of binding disclosure rules is no longer theoretical. AI is already inside American courtrooms. It is not wearing a robe. It is drafting opinions.
The US federal judiciary currently operates on interim guidance distributed by the Administrative Office of the US Courts in July 2025. The guidance is non-binding and does not require disclosure to litigants. It cautions against using AI for “core judicial functions,” but it is not a rule. The Federal Rules of Evidence have a proposed amendment, Rule 707, that goes to a vote by the Evidence Rules Committee on May 7. Rule 707 governs AI-generated evidence introduced by parties, not AI used by judges themselves. Senator Grassley has publicly called on the Judicial Conference to develop binding rules for judicial AI use. None have yet been issued.
In the same window, two other supreme courts drafted binding rules and finalized them. The choice of these two as comparison points is not meant as a representative global survey. It is meant to show that binding public rules for judicial AI use already exist somewhere, while the US judiciary still operates on guidance that is neither binding nor publicly available in full.
On February 18, 2026, the Supreme Court of the Philippines adopted the Governance Framework on the Use of Human-Centered Augmented Intelligence in the Judiciary. The framework applies to every level of the Philippine judicial system, from the Supreme Court to lower court employees and third-party vendors. It requires judges, court officials, and employees to disclose in plain language when AI tools are used in preparing documents. It prohibits AI from serving as the sole or primary basis of any adjudicatory outcome. It bans AI for cognitive behavioral manipulation and for real-time biometrics-based identification and tracking of people. And it establishes a permanent Committee on Human-Centered Augmented Intelligence to evaluate AI use, oversee procurement, and recommend removal of tools that create harm. No AI tool may be used in the Philippine judiciary without approval from the Supreme Court En Banc.
Around the same time, Paraguay’s Supreme Court of Justice approved Resolution No. 12,677, an institutional AI policy developed with UNESCO technical cooperation. The resolution prohibits delegating judicial decision-making to AI. It bans using AI without human supervision to determine guilt, establish penalties, or calculate compensation. It requires judges to disclose AI use and mark AI-generated text. Both rules now apply to every judge in their respective systems.
The comparison is about which country has decided the public gets to know the rules. The US federal system is bigger, layered by federalism, and built on constitutional judicial independence that makes top-down rules on judicial conduct harder to impose than rules on parties or evidence. The Philippines and Paraguay can move faster in part because their systems are structurally able to, but structural difficulty is not a defense of the status quo. The Federal Rules of Civil Procedure and the Federal Rules of Evidence already bind federal judges. The Rules Enabling Act process exists. No one has yet started it for AI use by judges themselves.
Disclosure matters even when the AI output is later reviewed by a human. A litigant whose case is decided by a judge has a basic interest in knowing whether AI helped draft the opinion, what it was used for, and whether the output was checked. Senator Grassley’s intervention and the judges’ withdrawal of opinions are forms of accountability, but they depend on outside actors noticing errors that happen to be visible. A disclosure rule does not require the error to be visible. It requires any AI use to be disclosed.
The question a litigant in a US federal court can reasonably ask tomorrow is whether the judge ruling on their case used AI to draft the decision, whether any staff member did, and whether the output was checked. The answers right now are a matter of individual judges’ personal policy or a non-binding guidance document the public cannot read in full. In the Philippines and Paraguay, the answers are in a public resolution. Whether those rules will be enforced in practice is a separate question that only time will answer. Binding rules requiring disclosure and limiting AI use exist there, and the public can read them.
Mercy treats its dystopia as science fiction, a future in which an AI renders a verdict with no humans in the decision. The real version is less cinematic. It is an AI draft of a court order making its way onto the public record before anyone has checked the citations. It is a litigant reading the decision in their own case and finding that a party they have never heard of is listed as a defendant. It is the disclosure rule that would have surfaced the AI use, still unwritten while the errors keep happening. The real dystopia is the absence of rules about how AI gets into the judge’s work in the first place.