Deep Bench Briefings Recap: “AI at Work: Emerging Employment Risks Exposed by AI Adoption”
FRB recently launched the Deep Bench Briefings, a webinar series designed for Founders, CEOs, General Counsels, and Chief Legal Officers who need actionable, business-first legal guidance without the fluff.
In our second Deep Bench Briefings session, FRB's Moish Peltz (Co-Managing Partner and Co-Chair of the Artificial Intelligence Practice Group) was joined by Elizabeth Schlissel (Partner in the Labor & Employment Practice Group) to tackle one of the most pressing legal challenges facing employers today: the growing web of risks created by AI adoption in the workplace.
From shadow AI to AI-enhanced employee complaints to hiring discrimination, the session made one thing clear: your employees are already using AI. The question is whether your organization is ready for what comes with it.
Your Employees Are Using AI (Whether You Know It or Not)
The session opened with a foundational point that framed everything that followed. As Elizabeth put it plainly: "Your employees are using AI, whether you have an AI policy or not, and whether you have an AI tool that you're giving them."
The Top 10 AI Workplace Risks
Moish and Liz walked through ten distinct categories of AI-related employment risk. Here are the highlights:
Shadow AI
Even companies that have invested in enterprise-grade AI tools are finding that employees continue to use public tools on personal phones, personal laptops, or other unauthorized devices, often without appreciating the risk. A written policy stating "use only approved tools" is a starting point, but it is not enough on its own. Companies need monitoring mechanisms, clear consequences for misuse, and ongoing training to keep the walled garden intact.
Confidential and Proprietary Information
When employees input sensitive data into non-enterprise AI platforms, the results can be irreversible. Consumer-tier and many base paid-tier tools may log inputs, retain data, or use it for training purposes. Of particular concern is the potential loss of trade secret status. A trade secret must be maintained as a secret, and a company must use reasonable efforts to maintain its secrecy. If proprietary algorithms, customer lists, pricing models, or business strategies are shared with a public AI system, a court may find that the company failed to adequately protect that information, undermining any future trade secret claim.
Consumer vs. Enterprise Tool Selection
Not all paid AI tiers are created equal. The word "Pro" or "Max" next to a plan does not guarantee enterprise-level data protection. Companies need to read the fine print, or, as Moish suggested, ask the AI tool itself to explain the differences between tiers. Enterprise plans typically offer disabled training defaults, configurable data retention policies, employee usage monitoring, and compliance frameworks like HIPAA or SOC 2. The cost of upgrading to an enterprise plan is almost always less than the cost of a data breach or noncompliance incident.
AI Governance Beyond the Written Policy
A written AI policy is your first line of defense, but it cannot be your only one. Policies get signed and forgotten. True AI governance requires a designated owner or committee responsible for ongoing oversight, regular policy updates, cross-departmental integration, and employee training. Frameworks like the NIST AI Risk Management Framework provide structure for treating AI governance as a continuous cycle, not a one-time compliance exercise.
AI Across Employment Agreements
AI governance cannot stop at the employee level. Independent contractor agreements, vendor agreements, and service contracts all need to address AI use explicitly, specifying which tools are permitted, who owns the inputs and outputs, and what data protections are required. The fact that a contractor uses their own tools does not exempt them from these obligations if they are handling company data.
AI-Created Workplace Complaints
One of the most eye-opening segments of the session addressed an emerging trend Liz has observed firsthand: employees using AI tools to draft and refine workplace complaints. What used to be a two-sentence email is now a detailed, legally precise document that hits every element of a potential discrimination or retaliation claim.
The consequences cut both ways. On one hand, legitimate complaints are now better articulated and harder to dismiss. On the other, employees without actionable claims are using AI to learn what would constitute one. This can result in manufactured or embellished complaints that employers are still obligated to investigate. Liz coined the term "hallucinated plaintiff" to describe complaints where, during an actual investigation, the facts as recounted verbally simply don't match the AI-polished written version.
The takeaway for employers: any written complaint must be taken seriously and investigated thoroughly, regardless of how it was drafted.
AI-Powered Employee Surveillance
Enterprise AI tools often monitor employee inputs and outputs, which is valuable from a governance perspective but creates its own obligations. In New York, employers are required under Civil Rights Law Section 52-C to provide written notice of electronic monitoring to all employees upon hiring. That notice should be specific enough to describe the tools being used and the nature of the monitoring. As AI-based monitoring becomes more sophisticated, these disclosure obligations will only grow.
AI in Hiring and Automated Employment Decision Tools
New York City's Local Law 144 requires employers using automated employment decision tools (AEDTs) in hiring or promotion decisions to conduct bias audits-and that requirement is now drawing more scrutiny after a recent audit revealed significant enforcement failures. Beyond New York City, New York has enacted the RAISE Act, Colorado has a comprehensive AI Act in effect, California has enacted its own law, and a growing number of states are introducing similar legislation.
The underlying risk is real: AI tools trained on existing employee data can perpetuate existing biases, even unintentionally. An AI system asked to find candidates "like our top performers" may systematically disadvantage protected classes without anyone issuing a discriminatory instruction. Intent is not the standard; disparate impact is sufficient for liability under many frameworks.
Attorney-Client Privilege and Work Product
AI creates a significant and often overlooked privilege risk. When an employee uses a personal or non-enterprise AI tool to organize their thoughts or draft materials related to a legal matter-even to prepare for a meeting with in-house counsel-that interaction is likely not protected by attorney-client privilege or the work product doctrine.
The Heppner case, which Moish and Liz have both written about, illustrates how information shared with a third-party AI platform before engaging counsel can lose its privilege protection. The practical guidance here: bring in the General Counsel earlier in the process, use only designated privileged workspaces for legal workflows, and train both legal and non-legal staff on how to maintain privilege in an AI-enabled environment.
Ongoing AI Training
All of the governance frameworks, policies, and enterprise tools in the world only work if employees know how to use them and why it matters. AI training should not be a one-time onboarding item. It should be integrated into regular team meetings, departmental updates, and company-wide communications. Organizations should identify internal champions, document what "good" looks like, and create space for employees who may be slower to adopt new technology to get up to speed without being left behind.
Key Takeaways
The consistent message throughout the session was that AI governance in the workplace is not a destination; it's an ongoing process. As Liz put it in her closing remarks: "AI is here to stay… take the time and invest in yourself and your company to learn how to properly use AI now."
For employers, that means:
- Assume your employees are using AI and build governance accordingly
- Read the fine print on every AI tool your organization uses or licenses
- Update your agreements, whether employment, contractor, or vendor, to explicitly address AI
- Invest in enterprise-tier tools and configure them properly
- Treat AI governance as a living function, not a policy document filed away after signing
- Understand the patchwork of laws that apply based on where your employees work
If you have questions about your organization's AI policies, employment agreements, or AI governance framework, reach out to FRB's Artificial Intelligence or Labor & Employment attorneys. Our attorneys are ready to help you navigate these evolving risks and build a practical, defensible approach to AI in the workplace.
Our next Deep Bench Briefings session, “Identifying AI & Privacy Blind Spots Before They Become Liabilities,” will take place on Wednesday, April 15, 2026, from 12:00 PM – 1:00 PM ET. This session will explore:
- Aligning vendor agreements with internal privacy policies
- Managing risks associated with employee use of shadow IT
- Identifying gaps between data practices and public disclosures
- Strengthening governance around AI and data usage
- Proactive steps to mitigate privacy and compliance exposure
Register for the next webinar here.