Deep Bench Briefings Recap: “AI Policy Checkup”


Mar 05, 2026

FRB recently launched the Deep Bench Briefings, a webinar series designed for Founders, CEOs, General Counsels, and Chief Legal Officers who need actionable, business-first legal guidance without the fluff.

In our first session, “AI Policy Checkup,” FRB’s Moish Peltz (Co-Managing Partner, Co-Chair of the Artificial Intelligence Practice Group), Jeffrey Berkman (Founding Partner and Chair of the Corporate & Securities Practice Group), and Alexander Migliorini (Senior Corporate Associate) tackled one of the most urgent issues facing businesses today: how to use AI responsibly without creating avoidable legal risk. We focused on the evolving AI regulatory landscape, where AI liability actually arises inside organizations, and how to build practical, defensible AI governance.

The message was clear: AI is already embedded in your business. The question is whether your policies have caught up.

The Legal Landscape Is Moving Quickly

AI regulation is developing rapidly at the state and international levels. While the U.S. does not yet have a comprehensive federal AI statute, companies must navigate emerging state laws and global frameworks.

The EU AI Act is particularly influential. Much like GDPR reshaped privacy practices worldwide, the EU AI Act is setting a global tone for how AI risk is categorized and regulated. Even U.S.-based businesses may be impacted if they serve EU customers, work with EU contractors, or operate internationally.

We also discussed governance frameworks like NIST’s AI Risk Management Framework, which provide structure for identifying, documenting, and mitigating AI risk. Even when not legally required, these frameworks help demonstrate that an organization has implemented thoughtful oversight, and they identify leaders within the organization who will evangelize best practices.

Bottom line: AI governance cannot be reactive. The regulatory environment is evolving too fast.

“Do You Know If You’re Using AI?”

One of the most important questions raised during the session was simple: Do you actually know where AI is being used in your organization?

AI is increasingly embedded in everyday tools, including Google Workspace, MS365, email platforms, document systems, meeting software, and search functions. Even if employees are not actively prompting ChatGPT, they may still be using AI-enabled features running in the background.

This creates risk in several areas:

  • Contractual exposure: Many older commercial agreements or vendor agreements are silent on AI use. Newer agreements may restrict it.
  • Data exposure: Employees bypassing enterprise safeguards and using personal AI accounts can inadvertently upload confidential or sensitive information outside enterprise controls. Data shows that this “shadow AI” usage is widespread.

The takeaway: governance starts with visibility. Companies need to map what tools are in use and establish clear rules for approved platforms and data handling.

Where AI Liability Actually Appears

AI risk typically does not come from dramatic scenarios. It arises from ordinary business relationships with customers, vendors, and employees.

Customer-Facing AI

If a chatbot provides incorrect information, the company, not the software, is responsible. Our attorneys discussed a real-world example where an airline was held accountable after its chatbot misrepresented fare policies. AI outputs used externally must be reviewed and monitored. “The system said it” is not a legal defense.

Vendor Risk

Vendors using AI tools on your data can create exposure if safeguards are unclear or sloppy. At the same time, your internal AI use may conflict with your contractual commitments, as expressed in vendor or customer agreements. Companies should approach AI vendor diligence the way privacy diligence evolved: review policies, understand training practices, confirm retention rules, and clarify accountability.

Employee and Contractor Risk

Employees will use AI, especially if they believe it increases efficiency. If no enterprise-approved tools are available, they may default to personal accounts with weaker protections. AI policies must meet employees where they are, and cover not only employees but also independent contractors and vendors. Otherwise, a major governance gap remains.

Transcription, Discovery, and Privilege Risks

One particularly practical discussion centered on AI transcription tools, especially in board and executive meetings. AI-generated transcripts may become discoverable in litigation. Even casual brainstorming conversations can create risk if transcripts reflect knowledge or intent that later becomes relevant in a dispute.

There is no one-size-fits-all answer on transcription. Instead, organizations should decide:

  • When transcription is appropriate
  • Where transcripts are stored
  • How long they are retained
  • Who reviews and verifies them before distribution

Clear guardrails are essential.

Common AI Policy Blind Spots

Many early AI policies were drafted quickly when generative AI first gained mainstream attention. Those policies may now be outdated.

Frequent blind spots include:

  • No designated leader responsible and accountable for the organization’s AI governance
  • No dynamic list of approved tools
  • No accounting for background AI tools embedded in other software
  • No rules governing confidential or client data
  • No requirement for human review of outputs
  • No alignment with other workplace, privacy, IP, risk mitigation, or document retention policies

A written policy that is not implemented or enforced provides little protection. In some cases, it can increase exposure if practices do not match stated controls.

Our attorneys also discussed the importance of planning for AI incidents, just as companies prepare for data breaches. If AI-generated content causes harm or error, organizations should have a response structure in place.

Insurance and Forward Planning

AI risk intersects with cyber, E&O, and D&O coverage. As AI-related claims increase, insurers may refine exclusions or introduce AI-specific policies. Companies should review existing coverage to understand what is protected and where gaps may exist.

Key Takeaways

AI is already integrated into daily workflows. Organizations that succeed will:

  • Map how AI is being used
  • Align governance with privacy and contract obligations
  • Require human oversight
  • Vet vendors carefully
  • Update policies continuously

AI governance is not about slowing innovation. It is about protecting the business while enabling it to move forward confidently and rapidly.

Our next Deep Bench Briefings session, “AI at Work: Emerging Employment Risks Exposed by AI Adoption,” is on Tuesday, March 24, 2026, from 2:00 PM – 3:00 PM ET. As AI tools become more common in the workplace, employers must adapt their handbooks and workplace policies to address new legal and operational risks.

This session will cover:

  • Updating employee handbooks to account for AI use
  • Compliance and oversight considerations
  • Employment law risks, including bias and discrimination
  • Confidentiality and intellectual property concerns
  • Building effective AI policies within the employment context

Register for the next webinar here.

If you have questions about your organization's AI policies or would like guidance on building a practical AI governance framework, reach out to FRB's Artificial Intelligence Practice Group. Our attorneys are ready to help you navigate the evolving regulatory landscape and protect your business.

Have Questions? Contact Us