Federal courtrooms are about to get their first evidence rule written specifically for artificial intelligence. Proposed Federal Rule of Evidence 707 would subject machine-generated output to the same reliability scrutiny that Daubert imposes on expert witnesses, and its arrival is closer than most practitioners realize. For criminal defense attorneys, understanding FRE 707 AI evidence standards is not academic: this rule will shape how prosecutors introduce algorithmic output, and it will define the battleground for challenging that output at trial.

This post explains what the proposed rule actually says, where it sits in the Rules Enabling Act pipeline, and what defense counsel should do now to get ready.

What Proposed FRE 707 Actually Says

The Advisory Committee on Evidence Rules published the draft text for public comment in August 2025. The proposed Federal Rule of Evidence 707 reads:

“When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)–(d). This rule does not apply to the output of simple scientific instruments.” [1]

In plain terms: if the government (or any proponent) offers a machine’s output as proof, and a human expert giving the same opinion would have to clear Daubert, then the machine output must clear Daubert too. Reliability, methodology, error rate, known limitations, and application to the facts of the case all become gatekeeping questions [2].

The carve-out for “simple scientific instruments” keeps breathalyzers, radar guns, and thermometers on existing tracks. Deterministic readings from well-understood devices stay out of the new regime [3].

The Procedural Timeline: Closer Than It Looks

The rule is moving. Here is the current state of the proposed Rule 707 pipeline, all governed by the Rules Enabling Act process at 28 U.S.C. § 2074 [4]:

  • May 2, 2025: Advisory Committee on Evidence Rules recommended publication.
  • June 10, 2025: Standing Committee approved for public comment.
  • August 15, 2025 to February 16, 2026: Public comment period (now closed).
  • January 15 and January 29, 2026: Advisory Committee public hearings.
  • Spring 2026: Advisory Committee will vote on a final draft after processing comments.
  • June 2026: Expected final report to the Standing Committee.
  • September 2026: Likely Judicial Conference session.
  • May 1, 2027: Deadline for SCOTUS to transmit any adopted rule to Congress.
  • December 1, 2027: Default effective date absent congressional action [5].

The public phase is over. Defense bar influence at this point flows through amicus submissions to the Standing Committee and, later, through litigation under the adopted rule.

The Acknowledgment Trap: Where Defense Leverage Lives

Read the text carefully. FRE 707 applies by its terms only to “machine-generated evidence,” which means the rule engages only when the evidence is identified as machine-generated. If the prosecution slips algorithmic output into the case without labeling it as AI-generated evidence, Rule 707 does not trigger on its own. The AI evidence admissibility gate swings only when the proponent acknowledges the nature of the evidence [2].

That is a defense problem and a defense opportunity. The problem: government proponents have an incentive to avoid the label. The opportunity: a timely motion in limine can force the issue. Demand disclosure of any algorithmic tooling used to produce an exhibit, compel the government to state on the record whether the exhibit is machine-generated, and if it is, insist on a Rule 707 and Rule 702 reliability hearing before the jury ever sees it.

The New York City Bar’s comment letter to the Advisory Committee specifically flagged this gap and urged the Committee to narrow the rule to “machine-generated inferential evidence” to avoid the simpler-output ambiguity [6].

FRE 707 Is Not the Deepfake Rule: Meet Proposed 901(c)

A common confusion: FRE 707 does not solve the deepfake problem. When a party contests whether an image, audio clip, or video was fabricated by generative AI, that is an authentication question, and the Advisory Committee has routed it to a separate proposal, Federal Rule of Evidence 901(c) [7].

The draft Rule 901(c) places a two-step burden on the parties. First, the challenger must produce “evidence sufficient to support a finding” that the item has been fabricated in whole or part by generative AI. Then the proponent must demonstrate authenticity by a preponderance of the evidence [8].

The Advisory Committee has not yet published Rule 901(c) for public comment. It remains on the agenda and is moving on a slower track. For defense attorneys, the takeaway is simple: proposed Rule 707 polices reliability of machine output; proposed Rule 901(c) polices authenticity of contested AI fakery. The two are distinct, and they will mature on separate calendars. Our post on identifying synthetic images explores the authenticity side in practical detail.

Louisiana Beat the Federal Courts to It

States are not waiting. Louisiana Act 250 of 2025 (House Bill 178) amended Louisiana Code of Civil Procedure article 371 and took effect on August 1, 2025, making Louisiana the first state with an AI evidence framework in its procedural rules [9].

The Louisiana rule takes a different angle. Rather than imposing a Daubert-style reliability gate, it imposes attorney-diligence obligations. Counsel must “exercise reasonable diligence to verify the authenticity of evidence” before offering it; offering evidence the attorney knew or should have known was AI-fabricated or artificially manipulated, without disclosure, is a statutory violation. Parties with reasonable suspicion that an opponent’s exhibits are AI-generated must raise the issue at a pretrial conference or pretrial admissibility hearing [10].

Louisiana is therefore a complement to proposed Rule 707, not a mirror. The federal rule gates reliability; the state rule polices counsel’s duty of candor. Expect other state legislatures to follow with their own variations before the federal rule lands.

Early Case Law Points Toward the New Regime

Courts are already grappling with AI evidence under existing doctrine, and the decisions foreshadow how FRE 707 will operate.

In State v. Puloka, King County Superior Court No. 21-1-04851-2 KNT (Wash. Super. Ct. Mar. 29, 2024), the trial court excluded defense-offered AI-enhanced video produced with Topaz Video AI, holding the technique failed both Frye and ER 702 reliability tests. It is believed to be the first published U.S. ruling on AI-enhanced video admissibility in a criminal case [11], [12].

In Matter of Weber, 2024 NY Slip Op 24258 (Surrogate’s Ct., Saratoga County, Oct. 10, 2024), the court rejected expert testimony whose valuation analysis relied on Microsoft Copilot, holding that any expert use of generative AI must be disclosed and tested in a Frye hearing [13].

And in United States v. Heppner, a Southern District of New York decision issued in early 2026, Judge Rakoff held that prompts and outputs exchanged between a party and Anthropic’s Claude were not protected by attorney-client privilege or the work-product doctrine [14]. For AI-generated evidence standards, Heppner matters because it signals that the raw material of an AI analysis, the prompts and intermediate outputs, may be fully discoverable against the party that generated them.

What AI Evidence Defense Attorneys Should Do Right Now

The rule is not final. The practice implications are. Five steps belong in every criminal defense intake checklist today.

Expand Your Litigation Hold Language

Standard document-preservation language does not reach AI artifacts. Update your holds to cover prompts, completions, system logs, model version strings, temperature and seed settings, API call records, and vendor-side retention. When you issue a preservation letter to an opponent, name these categories explicitly.
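To make the categories above concrete, they can be captured in a structured preservation record. This is a minimal sketch, not a vendor schema: every field name here is hypothetical and should be adapted to the tooling actually at issue in your case.

```python
import json
import time

# Hypothetical field names, illustrative only -- match them to the
# actual AI tooling identified in discovery.
AI_ARTIFACT_FIELDS = [
    "prompt", "completion", "system_log", "model_version",
    "temperature", "seed", "api_call_id", "vendor_retention_policy",
]

def make_preservation_record(**artifacts):
    """Build one JSON-serializable record covering the hold categories.

    Missing categories are recorded as None so the gap itself is
    documented rather than silently dropped.
    """
    record = {field: artifacts.get(field) for field in AI_ARTIFACT_FIELDS}
    record["captured_at"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    return record

def append_to_hold_log(record, path="litigation_hold.jsonl"):
    """Append-only JSONL log preserves records in capture order."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

The same field list doubles as the enumeration of categories to name explicitly in a preservation letter to an opponent.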

Retain a Machine-Learning Expert Early

Daubert AI evidence challenges turn on the model, not just its output. Credentials to look for include data provenance experience, benchmark-methodology work, bias and error-rate evaluation, and peer-reviewed publication. If your expert cannot explain a confusion matrix in their sleep, keep looking.
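For readers who want the reference point: a confusion matrix tabulates a classifier's hits and misses, and the Daubert-relevant error rates fall straight out of it. A minimal sketch, with invented numbers for illustration:

```python
def error_rates(tp, fp, fn, tn):
    """Derive error rates from a 2x2 confusion matrix.

    tp/fp/fn/tn = true positives, false positives,
    false negatives, true negatives.
    """
    false_positive_rate = fp / (fp + tn)  # negatives wrongly flagged
    false_negative_rate = fn / (fn + tp)  # positives wrongly missed
    precision = tp / (tp + fp)            # how often a flag is correct
    return false_positive_rate, false_negative_rate, precision

# Invented example: 90 true positives, 10 false positives,
# 5 false negatives, 895 true negatives.
fpr, fnr, prec = error_rates(tp=90, fp=10, fn=5, tn=895)
```

An expert who can walk a jury through why the false positive rate and precision diverge on imbalanced data is the expert you want at a Rule 707 hearing.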

Build a Subpoena Target List

Before proposed Federal Rule of Evidence 707 ever takes effect, you can discover this material under existing rules. Target (a) training-data descriptions, (b) model cards and evaluation reports, (c) error-rate testing on the specific task class at issue, (d) post-deployment retraining history, and (e) vendor SOC 2 or audit reports. The NYC Bar has separately urged that Civil Rule 26 and Criminal Rule 16 be amended to compel this material; until they are, aggressive use of existing subpoena powers is the workaround [6].

File Motions in Limine at the Acknowledgment Trigger

The proposed Rule 707 text only triggers on acknowledgment. When you suspect the government is introducing algorithmic output without the label, move in limine to compel disclosure and a reliability hearing. Cite Weber for the disclosure-and-hearing premise and Puloka for the Frye/702 exclusion outcome. Our post on authenticating social media evidence develops parallel reasoning for screenshot challenges, and our hardware-signed C2PA camera credentials post explains why provenance metadata matters at the threshold.

Counsel Your Client About Their Own AI Use

After Heppner, assume that your client’s casual AI use may be discoverable and unprivileged. Advise clients against running case facts through consumer chatbots. If AI is used in case preparation, do it under counsel’s direct supervision with a protocol that documents prompts, outputs, and the reasoning that led to retaining or discarding them.

Conclusion

Proposed Federal Rule of Evidence 707 is moving toward adoption on a predictable schedule: a spring 2026 Advisory Committee vote, a June 2026 report, a Judicial Conference session later that year, SCOTUS transmission by May 1, 2027, and a default effective date of December 1, 2027. The AI-generated evidence standards it imports are Daubert standards, and the key defense lever is the acknowledgment trigger. Deepfake evidence rules will land separately through Rule 901(c). Louisiana has already moved at the state level. Early decisions like Puloka, Weber, and Heppner preview how courts will behave under the new regime. Preparation beats panic.

If you are defending a case with algorithmic prosecution evidence, or you need an expert to dissect machine-generated output before trial, Lucid Truth Technologies can help. Reach out through our contact form and we will connect you with the forensic analysis and expert testimony your case needs.

References

[1] Committee on Rules of Practice and Procedure, Preliminary Draft of Proposed Amendments to the Federal Rules, Administrative Office of the U.S. Courts, August 2025. [Online]. Available: https://www.uscourts.gov/sites/default/files/document/preliminary-draft-of-proposed-amendments-to-federal-rules_august2025.pdf

[2] “New Evidence Rule 707 Would Set Standards for AI-Generated Courtroom Evidence,” National Law Review, 2024. [Online]. Available: https://natlawreview.com/article/new-evidence-rule-707-would-set-standards-ai-generated-courtroom-evidence

[3] “Safeguarding the Courtroom from AI-Generated Evidence: FRE 707 Approved by Judicial Conference,” Nelson Mullins Red Zone, 2025. [Online]. Available: https://www.nelsonmullins.com/insights/blogs/red-zone/news/safeguarding-the-courtroom-from-ai-generated-evidence-federal-rule-of-evidence-707-approved-by-judicial-conference

[4] 28 U.S.C. § 2074, Cornell Legal Information Institute. [Online]. Available: https://www.law.cornell.edu/uscode/text/28/2074

[5] “Pending Rules and Forms Amendments,” U.S. Courts. [Online]. Available: https://www.uscourts.gov/forms-rules/pending-rules-and-forms-amendments

[6] New York City Bar Association, “Comments on Proposed Federal Rule of Evidence 707 and Amendments to Rule 609,” 2026. [Online]. Available: https://www.nycbar.org/reports/comments-on-proposed-federal-rule-of-evidence-707-and-amendments-to-rule-609/

[7] “A Deepfake Evidentiary Rule (Just in Case),” UIC Law Library News. [Online]. Available: https://library.law.uic.edu/news-stories/a-deepfake-evidentiary-rule-just-in-case/

[8] R. A. Delfino, “Deepfakes on Trial 2.0: A Revised Proposal for a New Federal Rule of Evidence to Mitigate Deepfake Deceptions in Court,” SSRN, 2025. [Online]. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5188767

[9] “Dismissed on a Technicality: Louisiana Addresses AI Evidence,” Smart On Crime LA, 2025. [Online]. Available: https://www.smartoncrimela.com/blog/dismissed-on-a-technicality-louisiana-addresses-ai-evidence

[10] A. B. Garcia, “AI Meets the Courtroom: Louisiana Sets Ground Rules for Artificial Evidence,” Mondaq / Deutsch Kerrigan, 2025. [Online]. Available: https://www.mondaq.com/unitedstates/new-technology/1656204/ai-meets-the-courtroom-louisiana-sets-ground-rules-for-artificial-evidence

[11] “Washington Court Rejects Novel Use of AI-Enhanced Video in Trial,” Greenberg Traurig, 2024. [Online]. Available: https://www.gtlaw.com/en/insights/2024/5/washington-court-rejects-novel-use-of-ai-enhanced-video-in-trial

[12] “Court Excludes AI-Enhanced Videos from Trial Evidence,” ABA Litigation News, 2024. [Online]. Available: https://www.americanbar.org/groups/litigation/resources/litigation-news/2024/fall/court-excludes-aienhanced-videos-trial-evidence/

[13] Matter of Weber, 2024 NY Slip Op 24258 (Surr. Ct., Saratoga Cty., Oct. 10, 2024), Justia. [Online]. Available: https://law.justia.com/cases/new-york/other-courts/2024/2024-ny-slip-op-24258.html

[14] “United States v. Heppner,” Harvard Law Review Blog, March 2026. [Online]. Available: https://harvardlawreview.org/blog/2026/03/united-states-v-heppner/