California Courts Adopt AI Regulation: An Example of Good Guidance for the Public Sector


Giving credit to California regulatory policies is not something I frequently consider doing (it definitely wasn't on my 2025 bingo card); however, the court system's recent adoption of rules governing court officers' use of generative AI represents a good model for jurisdictions across the country to consider. Those of us who follow AI in the news have seen the horror stories of AI fabricating case citations in motions and other court documents, and this rule adoption marks a significant step forward in balancing technological innovation with public safety and trust (Greenberg, 2025). As the largest court system in the United States takes this regulatory approach (which some other jurisdictions have done on a smaller scale), it sets a strong precedent that benefits not only the citizenry, but public sector agencies and employees as well. This framework provides a roadmap for responsible AI use that companies like Policereports.ai have already embraced in our work with law enforcement.

The new rules, which take effect on September 1, 2025, address concerns about data privacy, accuracy, and potential bias in AI-generated content (Sloan, 2025). By prohibiting the entry of confidential information into public AI systems and mandating human oversight of AI outputs, these regulations safeguard sensitive data and ensure that AI remains a tool to improve human efficiency and accuracy, rather than replace it. The rules also mandate that court staff and judicial officers take reasonable steps to verify the accuracy of AI-generated material and correct any erroneous or fabricated output. Additionally, the rules require compliance with all applicable laws, court policies, and ethical standards when using generative AI (Greenberg, 2025).

For the public, this means greater protection of personal information and increased transparency in court proceedings. According to the new rules, when AI is used to generate documents, the authors and courts must disclose this fact, allowing citizens to understand how the technology is being employed in the judicial process (Greenberg, 2025). This transparency builds trust and ensures that AI use aligns with the public's expectations of a fair and impartial justice system. The rules also explicitly prohibit the entry of confidential, personal identifying, or other nonpublic information into public AI systems, safeguarding sensitive data such as social security numbers, medical information, and sealed court records.

The rules also allow courts to adopt more restrictive policies if needed, providing flexibility to address specific local concerns or operational requirements (Sloan, 2025). As long as individual courts do not adopt overly restrictive additional policies, the established framework strikes a good balance between AI use and protections.

Like the policies established by the courts in this California example, law enforcement agencies must also have clear guidelines for responsible AI use. The court system's emphasis on privacy, accuracy verification, and bias mitigation provides a valuable model for law enforcement to follow, ensuring that AI-assisted police work maintains the highest standards of integrity and fairness.

At Policereports.ai, we are already ahead of the curve in implementing similar safeguards. Our document completion solution for law enforcement incorporates many of the principles outlined in California's new court rules. Policereports.ai's system operates on secure, closed networks, prioritizing data privacy and security. It incorporates robust human review processes to verify the accuracy of content and remove any potential bias, ensuring that final reports meet the highest standards of integrity. If agencies or jurisdictions desire, our system maintains transparency by clearly indicating when AI has assisted in report generation, allowing for full accountability.

By aligning with these principles, Policereports.ai demonstrates how AI can be leveraged responsibly in the public safety sector, improving efficiency without compromising accuracy or accountability. As other states look to California's example, we can expect to see similar regulations adopted across the country. This widespread adoption of responsible AI practices in both the court system and law enforcement will ultimately lead to a more efficient, transparent, and trustworthy justice system, one that delivers the benefits of increased efficiency while also protecting the rights and safety of the citizens we protect.

For more in-depth details on how Policereports.ai increases efficiency at agencies while maintaining the highest ethical and legal standards in AI use, see my previous blog post: https://www.policereports.ai/blog/unpacking-ai-use-in-law-enforcement-document-completion

To review the complete rules as adopted by California Courts, see this link: 25-109 - 20250718-25-109 - updated 20250616

References:

Greenberg, H. (2025, August 10). Landmark ruling: California courts adopt generative AI policies. Los Angeles Magazine. https://lamag.com/news/landmark-ruling-california-courts-adopt-generative-ai-policies

Sloan, K. (2025, July 18). California court system adopts rule on AI use. Reuters. https://www.reuters.com/legal/government/california-court-system-adopts-rule-ai-use-2025-07-18/