Responsible AI Adoption: Policies, Training, and Practical First Steps
By: Jeffrey W. Berkman, Esq., LL.M., Douglas E. Singer, Esq., and Alexander R. Migliorini, Esq., MBA
If your organization is still “waiting to see what happens” with AI, you are not alone. The important thing to remember is that it’s never too late to start thoughtfully engaging with AI, and even small, well-guided steps can lead to meaningful progress.
We recently hosted a webinar, “Building an AI-Ready Business: Strategy, Policy, and Practice,” featuring FRB’s Jeffrey Berkman, Doug Singer, and Alexander Migliorini, alongside Kathy D’Agostino of Win at Business.AI. The panel explored a reality many businesses are facing: the AI hype can feel overwhelming, the risks are real, and yet the opportunity is too big to ignore. The good news is you do not need a massive tech buildout to begin. You need a clear purpose, the right guardrails, and a thoughtful rollout that earns trust.
Here are the core takeaways and a practical path forward to AI adoption for companies ready to move from hesitation to confident action.
AI Doesn’t Have to Be Scary - But It Does Have to Be Intentional
A big barrier to adoption is fear: fear of job loss, fear of getting it wrong, fear of losing control of sensitive data. Our speakers made a key point: AI is an important new tool, not a replacement. The companies that obtain value from AI treat these tools as an amplifier. Used properly, AI helps teams do meaningful work faster, without removing humans from the process.
That starts with transparency. If leadership cannot clearly explain why AI is being introduced, employees will fill in the blanks (often with worst-case assumptions). The simplest, most effective rollout message is: “AI is here to augment our work, not erase it. Let’s adopt it responsibly, together.”
Since AI Is Already Working in the Shadows, Policy Becomes Urgent
Many companies believe they are not using AI until they realize it is embedded in everyday platforms like Microsoft 365, Google Workspace, video conferencing, productivity software, and marketing tools. When AI is already part of the workflow, delaying policy implementation creates material risk.
From a legal and governance standpoint, it is important to note that improper handling of confidential information (for example, entering that information into public AI tools) could breach an NDA or other commercial agreement. AI use can also implicate privacy laws. Clients and customers may demand disclosures or restrictions around AI use, and some AI functionality cannot simply be “turned off” on a client-by-client basis once it is embedded.
AI policy is not optional. It should be viewed as operational infrastructure.
The Best AI Policies Are Usable - Not Buried, Not Bloated
A common failure point is policy design. A one-page policy rarely addresses real risk. A 40-page policy often creates confusion and paralysis. The most effective policies are clear, practical, and actively used.
Strong AI policies typically define approved tools, require company-managed accounts, restrict sensitive inputs, mandate human review, and evolve as tools and use cases change. Equally important, policies must be paired with training. Giving teams AI access without guidance is a recipe for inconsistent results and unnecessary exposure.
Start Small: Don’t Buy a “Solution,” Pick a Real Problem
Companies do not need to begin with a shopping spree of tools. The best starting point is identifying a low-risk, high-value problem: for example, something that reliably eats up time or slows down delivery.
More specific examples include summarizing emails, drafting first-pass documents with human review, synthesizing market and industry trends, and organizing large sets of contracts or diligence materials. Start with two or three tools at most, run a pilot, and track results. At this stage, the goal is not perfection, but rather learning what works in your environment and building internal confidence.
Prompts Matter, but Workflow Matters More
Better prompts lead to better output, but the real value of AI shows up in workflow design. Vague instructions produce vague results; clear context makes it far more likely that AI will produce quality output.
High-performing teams use AI for first drafts and synthesis, keep humans as reviewers and decision-makers, and create repeatable prompt structures for common tasks like research, SOPs, marketing, and client communications. AI can even help refine prompts by asking clarifying questions, turning unclear ideas into structured instructions.
Don’t Forget Recordkeeping: Prompts Can Be Discoverable
One of the most overlooked risks is that AI prompts and outputs may become business records. People often speak more freely to AI than they would in an email, but those conversations may later be discoverable in litigation or investigations.
There is also a practical concern: if valuable work product lives in an employee’s personal AI account, retrieval becomes difficult if the employee leaves. This is a strong reason to adopt enterprise-approved tools and accounts early.
Where to Go from Here
For organizations ready to move forward responsibly, we recommend a simple framework:
- Assign clear ownership for AI inside the organization
- Develop or revisit your AI policy
- Implement immediate guardrails around tools, accounts, and data
- Select one pilot use case with defined success metrics
- Train teams on safe use and practical prompting
- Measure outcomes and scale what works
The takeaway for business leaders is clear: do not be fearful but also do not be careless. The companies that succeed with AI will not be the ones that rush ahead without a plan, but those that move deliberately, with transparency, training, and policies that support safe, effective adoption.
FRB’s Artificial Intelligence Practice Group can help your business navigate the legal, regulatory, and risk challenges of adopting AI. To learn how we can support your organization, contact us at (516) 599-0888 or fill out the form below.
DISCLAIMER: This summary is not legal advice and does not create any attorney-client relationship. This summary does not provide a definitive legal opinion for any factual situation. Before the firm can provide legal advice or opinion to any person or entity, the specific facts at issue must be reviewed by the firm. Before an attorney-client relationship is formed, the firm must have a signed engagement letter with a client setting forth the Firm’s scope and terms of representation. The information contained herein is based upon the law at the time of publication.

