Moral Machine – AI Data Privacy, Data Security, Compliance & Shadow AI

Zero Data Retention? AI Privacy, Compliance & Shadow AI with Cathy Miron, CEO of eSilo
What really happens to client data when you use tools like ChatGPT—especially if you click “delete” or enable zero data retention? In this episode, I’m joined by Cathy Miron, CEO of eSilo and a nationally recognized expert in data protection and cybersecurity, to unpack the privacy, security, and governance realities behind modern LLMs. We discuss how litigation and vendor policies can complicate “private” chats, why backups, logs, and engineering choices often outpace contract language, and practical ways lawyers and regulated organizations can use AI without compromising confidentiality or privilege. We cover de-identification workflows, BAAs and vetted tools, API nuances around ZDR, and the human firewall—acceptable-use policies, training, and curbing “shadow AI” with sanctioned enterprise subscriptions. It’s a candid, pragmatic guide to balancing innovation with risk. These issues are further discussed in an article on the Moral Machine Blog, “The ChatGPT Panopticon: Black Mirror Meets Rule 1.6.”
Cathy Miron is a seasoned leader with experience in Fortune 500, small businesses, and non-profits. She is known for her expertise in data protection, cybersecurity, and digital transformation.
She has delivered multi-million-dollar improvement programs in large organizations, re-imagined digital customer experiences, and consulted on multiple global security and compliance initiatives.
Cathy can be reached at: https://www.linkedin.com/in/cathytmiron/
Watch or listen to the podcast here:
Transcript:
**This transcript has been prepared automatically by AI and may contain inaccuracies**
Chris D. Warren [00:00:00]:
So welcome back to the Moral Machine, where we explore the intersection of law, technology, and ethics. Today we’re diving into one of the bigger concerns facing lawyers and organizations experimenting with AI, which is: what happens to my data when I use a tool like ChatGPT? And this is especially in light of the lawsuit against OpenAI from the New York Times. To help unpack this, I’m joined by Cathy Miron, CEO of eSilo. She’s a nationally recognized expert in data protection and cybersecurity, and she works with organizations navigating compliance and security challenges, so she has a unique perspective on how AI is shaping privacy risk. So, Cathy, let’s just jump right in.
Chris D. Warren [00:00:37]:
What’s the expectation of privacy, and of data privacy specifically? When people hit delete in ChatGPT, they think that information is gone, but that doesn’t seem to be the whole story. What’s the main concern facing your clients, and what are you seeing as challenges to keeping their data private?
Cathy Miron [00:00:56]:
Yeah, well, it’s ultimately privacy, right? Anytime you use a third-party tool, whether it’s an LLM or an AI-powered widget, you have to give up a certain amount of your data to that system in order for it to do its thing. And unfortunately, some of the recent case law and the changes hitting this landscape mean that where you thought you were having a private chat, or maybe selecting “delete my data” or “delete this document” after you’ve uploaded it, those companies, OpenAI specifically as well as others, are now being mandated to retain that information for those ongoing cases. And so where you thought you had privacy because you were paying for a subscription rather than using the free one, and because you were paying you had the option to turn off certain checkboxes, those checkboxes now don’t mean what you thought they meant. And unfortunately, we see this a lot in technology.
So I’m always a bit of the skeptic in the room; when everyone’s really gung-ho on something new, I’m sitting back thinking to myself, well, how might this go wrong? And as you’ve probably seen, the AI landscape is changing so fast. These startups and new companies are racing to market, to raise funding, to try to capture market share. And I know from experience that oftentimes one of the first things that falls by the wayside when that happens is the security, compliance, and privacy side of it. There’s always a little bit of catch-up that has to happen, and I think we’re starting to see that now.
Chris D. Warren [00:02:26]:
Right. So even if the paperwork’s in place and the contracts have been written and signed, the technical reality isn’t matching the end user’s expectations. OpenAI agrees, “I’m not going to retain your data,” but is now retaining your data. Is there any protection at all here? Are there any mechanisms in place that allow people to use LLMs like OpenAI’s without sacrificing all their privacy and the security of their data?
Cathy Miron [00:02:52]:
Yeah, there are a couple of different tiers, I guess I would say, of ways you can approach this. The average small business, small law firm, even mid-sized company isn’t going to be rolling its own LLM, hosting it in its own environment in a private cloud it controls, where it can govern the inputs and the outputs and know with certainty where that data goes, right? That’s the ideal solution, but it also takes big bucks, and most of us are consumers of those AI tools, not building one for internal use.
So if you are going to be using what I would call an off-the-shelf AI tool, I think the best thing you can do is de-identify or anonymize your data ahead of time. It could be as simple as putting whatever content you want to feed the LLM into a Word document and doing a find and replace, right? Anonymize all the names and any of the important identifying facts, run it through the LLM, and then take the output back and undo the find and replace. I mean, that’s the simplest solution, and it doesn’t cost you anything.
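To make that workflow concrete, here is a minimal Python sketch of the reversible find-and-replace Cathy describes. The mapping entries and the `send_to_llm` stand-in are hypothetical; in practice you would keep the mapping local and never send it to any vendor.

```python
# Minimal sketch of the reversible find-and-replace workflow described above.
# The mapping and the send_to_llm() stand-in are hypothetical placeholders.

def pseudonymize(text: str, mapping: dict[str, str]) -> str:
    """Swap real identifiers for neutral placeholders before sending anything out."""
    for real, placeholder in mapping.items():
        text = text.replace(real, placeholder)
    return text

def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Undo the substitution on the LLM's output."""
    for real, placeholder in mapping.items():
        text = text.replace(placeholder, real)
    return text

# Hypothetical mapping for one matter; keep it local, never upload it.
mapping = {
    "Jane Doe": "CLIENT_A",
    "Acme Corp": "COMPANY_1",
    "123 Main Street": "ADDRESS_1",
}

prompt = pseudonymize("Summarize Jane Doe's dispute with Acme Corp.", mapping)
# response = send_to_llm(prompt)        # whatever vetted tool you use
# clean = reidentify(response, mapping) # restore the real names locally
```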
There are also third-party tools you can use. So if you’re in healthcare or finance or one of the other highly regulated industries, there are a couple of things you can do. One, you can purchase a software tool that will help you de-identify or anonymize that data ahead of time at scale. And two, depending on what industry you’re in, as long as you’ve done proper vetting, there may be approved AI tools for your specific industry that have already been evaluated and vetted. What I’m thinking about is government contracting, right? There are certain tools and platforms that meet rigorous security standards set by the government, so you want to make sure that if FedRAMP or CMMC applies to you or to your clients, you’re using tools approved under those frameworks.
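Cathy doesn’t name a specific product, but as one illustration of tool-assisted de-identification at scale, here is a sketch using the open-source Microsoft Presidio library; choosing Presidio is this example’s assumption, not an endorsement from the episode.

```python
# Sketch of tool-assisted de-identification at scale using Microsoft Presidio,
# one open-source option (an illustrative choice, not an endorsement).
# Requires: pip install presidio-analyzer presidio-anonymizer
# plus a spaCy English model (e.g., en_core_web_lg).
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "Patient John Smith, DOB 01/02/1980, reachable at john@example.com."

# Detect likely PII/PHI entities, then replace each one with its entity-type tag.
results = analyzer.analyze(text=text, language="en")
scrubbed = anonymizer.anonymize(text=text, analyzer_results=results)

print(scrubbed.text)
# e.g. "Patient <PERSON>, DOB <DATE_TIME>, reachable at <EMAIL_ADDRESS>."
```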
So there are a couple of different ways you can address this. And of course, contracts, right? I’m talking to the lawyer; on the contract specifics, you’re the expert. But I always caution folks: no matter what the privacy policy tells you, what the terms of service are, what the contractual terms are, I have consistently seen that the legal language is often catching up to what the engineers are doing, and that changes day by day. So maybe when the document was written, everything was factually accurate: we don’t share your information, we don’t retain it. But as we’re seeing right now, sometimes that changes. And so even though you thought you had zero retention, that your data wasn’t stored by those third parties, we often come to find that that is in fact not the case.
Chris D. Warren [00:05:37]:
So let’s talk about zero data retention, or ZDR. That seems to be the safe harbor, along with the enterprise route. So you have one bucket of people who are creating their own specialized LLMs for their businesses, and most folks probably don’t fit into that bucket. And then you have vendors offering zero data retention policies as part of their products, with agreements specifically with the LLM providers that the API calls are not stored in any vendor log. So there’s some protection of your data. What’s your position on that, and is it safe for people to input their data into these interfaces that interact with these LLMs?
Cathy Miron [00:06:17]:
I certainly think it’s better than nothing. Meaning, could you make an argument that says, well, I understood from the vendor that these were the types of API calls that were supposed to be zero retention, so those were the only ones I used, and any data retention was outside of my hands? Perhaps you could make that argument from a legal standpoint.
But what I would tell you is that the devil’s always in the details. As an example, we have a client in the healthcare space, right? So, a lot of sensitive protected health information, PHI. They use AI as a core part of their product; that’s actually the magic behind their platform. So when helping them architect that solution, we said, one, we need to make sure we have a business associate agreement with OpenAI. And if you read the documents, and a lot of startup founders do not (hopefully their lawyers do), they tell you that there are certain types of API calls or interactions with the LLM for which they can supposedly guarantee zero data retention, and others for which they can’t.
So if you know what is included in zero data retention, you have to be very careful that your developers don’t use anything that might actually be retained. But additionally, we don’t think that’s enough. So we went through the trouble of finding a service to de-identify PHI, because the way I think about it is this: regardless of what the language of the terms of service says, or what the vendor purports to have happening behind the scenes, I’m in the backup business, right? That’s how our company started. We take backups of everything.
So even though they’re not supposed to retain anything, I know that there are security logs, right? I know those are going to contain some information. There are going to be backups to keep the service available. There are going to be all kinds of mechanisms behind the scenes. So I just assume that any third-party service that I use or my clients use is going to be breached, and the question becomes how you minimize the impact and the likelihood to the extent possible. That, to me, makes it an acceptable risk to say, look, let’s go reap all the benefits of AI, but let’s do it smartly and take as much as we can into our own hands.
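One way to act on Cathy’s warning about developers straying outside ZDR-eligible calls is a simple guardrail in your own code. Here is a hedged sketch: the endpoint allowlist is a placeholder, and which calls your ZDR agreement actually covers is a question for your contract and the vendor’s current documentation, not for this code.

```python
# Hypothetical guardrail: refuse any outbound call to an endpoint your ZDR
# agreement doesn't cover. The allowlist entries below are placeholders;
# populate them from your own contract, not from this sketch.

ZDR_ELIGIBLE_ENDPOINTS = {
    "/v1/chat/completions",  # example entry; verify against your agreement
}

class ZDRPolicyError(RuntimeError):
    """Raised when code tries to hit an endpoint outside the ZDR agreement."""

def assert_zdr_eligible(endpoint: str) -> None:
    if endpoint not in ZDR_ELIGIBLE_ENDPOINTS:
        raise ZDRPolicyError(
            f"{endpoint} is not on the ZDR allowlist; "
            "route this call through an approved, de-identified path."
        )

# Usage: run the check before every outbound request to the LLM vendor.
assert_zdr_eligible("/v1/chat/completions")  # passes
# assert_zdr_eligible("/v1/files")           # would raise ZDRPolicyError
```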
Chris D. Warren [00:08:31]:
So that sounds like a real compliance nightmare, especially for industries like you mentioned: finance, healthcare, legal. So beyond BAAs and ZDR agreements, and, like you mentioned, tokenizing or anonymizing your data, is there any other suggestion you would have now that we’re forced into this kind of dystopian, Black Mirror timeline, where there’s a mass national surveillance program run, against its will, by the largest AI company in the world? What can we do here? What are some best practices the average person can engage in to protect themselves and their data?
Cathy Miron [00:09:07]:
I think, aside from the legal controls from contracting agreements and the technical controls that I described, the last line of defense always seems to be your human firewall, right? So, educating your staff on what the risks are, on what some of these new developments mean, on what is acceptable and not acceptable to put into these LLMs, and on which LLMs are approved for use at your company.
So if you have an IT team or security team that has reviewed all of the different options out there and is choosing the ones that are more compliance- and security-oriented, those would be the preferred options. But if you don’t have an AI acceptable use policy, and if you haven’t done targeted discussions and training with your staff multiple times, I can guarantee you that your staff are using free tools, or tools that you’re not aware of, because it makes their lives so much easier.
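As a toy illustration of that sanctioned-tools idea, here is a sketch of the kind of approved-AI-tools list an IT or security team might maintain alongside an acceptable use policy. The tool names and policy fields are invented examples, not recommendations.

```python
# Toy approved-AI-tools registry, the sort of thing an IT or security team
# might maintain alongside an acceptable use policy. The tool names and
# policy fields below are invented examples, not recommendations.

APPROVED_AI_TOOLS = {
    "chatgpt-enterprise": {"tier": "enterprise", "client_data_allowed": False},
    "copilot-business":   {"tier": "business",   "client_data_allowed": False},
}

def check_tool(tool: str) -> str:
    """Return a plain-language verdict for a tool an employee wants to use."""
    policy = APPROVED_AI_TOOLS.get(tool)
    if policy is None:
        return f"'{tool}' is not sanctioned; use an approved tool instead."
    if not policy["client_data_allowed"]:
        return f"'{tool}' is approved, but de-identify client data first."
    return f"'{tool}' is approved."

print(check_tool("chatgpt-enterprise"))
print(check_tool("random-free-chatbot"))  # flags shadow AI
```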
And so it’s better to have paid enterprise subscriptions or business subscriptions that you try to funnel all of that activity into, because the last thing we ever want to do is stifle innovation in the name of security and compliance. I think we have to figure out a way to live in the “and”: I want AI, and I want security. And unfortunately, because the landscape is just changing so fast, you have to really keep your eyes and ears open as things develop and change.
And I do expect we will see more regulatory guidance around this. I do expect we will see better auditing and testing standards, so we will have a better understanding of who the good, safe vendors are. But for right now, it’s a bit of the wild, wild west, and so you just need a good person who can help you and your leadership team navigate it.
Chris D. Warren [00:10:53]:
You’re speaking my language. It’s all about the human governance and the risk management. It’s about being proactive with privacy and pragmatic training, and providing the right tools for the right job. In the legal profession, we have an unbelievable amount of shadow AI, or bring-your-own-AI-to-work, right? Especially in scenarios where companies are just outright banning it. It leads essentially to a Prohibition of the digital age, and we know how well that worked out in the past.
So, Cathy, thank you so much for joining me to discuss this very important and timely topic, and thanks for sharing your insights and thoughts. And thank you for listening to the Moral Machine.
Cathy Miron [00:11:28]:
Thanks for having me, Chris.
Views expressed on Moral Machine are the author’s own and do not reflect those of the New Jersey Supreme Court Attorney Ethics Committee (District VI) or Falcon Rappaport & Berkman LLP.