Block & Order | From Chatbots to AI Agents: Wei Chen Weighs in on What Lawyers Must Know Now


May 05, 2026

 

Wei Chen, Chief Legal Officer at Infoblox and Founder of the Atticus Project, joins hosts Kyle and Moish to explore agentic systems and how AI is transforming legal workflows, internet infrastructure, and global regulation. She breaks down how AI is evolving from prompt-based tools to autonomous agents, and why foundational systems like DNS are key to maintaining an open, scalable digital ecosystem.

Wei also shares practical guidance for legal professionals navigating AI, from building everyday AI habits to understanding privilege risks when using these tools. The conversation highlights the challenges regulators face in keeping pace, the importance of enforcement in AI governance, and what organizations must do now to stay ahead as the agent-driven future takes shape.

Subscribe to Block & Order to learn more about the latest news on the legal side of digital assets!

Follow B+O on Our Socials:

Twitter: @BAndOShow
LinkedIn: Block & Order
Instagram: @blockandorder
TikTok: @blockandorder

Chapters:

00:00 – Welcome to Block & Order
00:53 – The “Flossing Teeth” Analogy for Learning AI
02:20 – Introducing Wei Chen & Background
03:40 – Career Journey: From M&A to AI
06:00 – What Infoblox Does & Why DNS Matters
09:10 – Understanding DNS & Internet Infrastructure
11:40 – The Agentic Web & Future of AI Systems
14:50 – Open Internet vs Platform Lock-In
18:10 – AI Regulation: What’s Working & What’s Not
21:50 – Why Enforcement Is the Missing Piece
25:30 – The 3 Phases of Agentic AI (A2UI → MCP → A2A)
30:10 – AI Governance Inside Organizations
33:40 – Building AI Habits & Adoption Strategies
38:30 – Practical AI Use Cases for Lawyers
42:30 – AI Risks: Privilege & Confidentiality
46:00 – Final Thoughts & How to Keep Up with AI

Watch or listen to the podcast here:

YouTube   

Transcript:

**This transcript has been prepared automatically by AI and may contain inaccuracies**

Kyle Lawrence [00:00:14]:
So as Block and Order shifts its focus, and I don't know if shift is the right word, from crypto and blockchain to including AI and the experts in that space. And we certainly have one of them on the show today. I want to highlight a quote of yours that I personally find really interesting. I feel seen a little bit. It says, learning AI is as easy and as hard as flossing teeth every day. Now, I'm somebody who does floss every day; it's something that my parents basically just beat into me because they didn't want me to have the problems that they have. And I know a lot of people who just don't. They go to the dentist, they spend their life savings on dental work, and they still don't floss.

Kyle Lawrence [00:00:53]:
How did you arrive at that as being the nexus point for learning AI? This is fascinating.

Wei Chen [00:00:59]:
Yeah, that’s a great question. And I have to confess, first, I don’t floss every day.

Kyle Lawrence [00:01:05]:
Do you use AI every day?

Wei Chen [00:01:07]:
Oh, I absolutely use AI every day. And that's how I'm thinking about this, because there was a time, right? So I started using AI daily, or at least tried to, about three years ago when OpenAI's ChatGPT first came out. And I have to say, for the first six months or so, I fell off the wagon often. And I tried a number of things. You know, I put a yellow sticker on my laptop that says, use AI. And I asked my husband to remind me, use AI, and that kind of thing.

Wei Chen [00:01:45]:
But eventually I'm just like, you know what, if I didn't use it today, I'm going to use it the next day. And even if I just use it once a week, it's fine. And then I suddenly realized, actually, that's my behavior with flossing teeth. It's really the same thing.

Kyle Lawrence [00:01:59]:
You know, I've tried the thing, which I don't know if I can really show it, but you put something on the ceiling, so when I go like this, it says "work harder" on the ceiling. Doesn't really work well. That's a great segue into introducing our guest today, Wei Chen, the Chief Legal Officer and Executive Vice President of Regulatory Strategy for Infoblox, as well as the Founder of the Atticus Project. Please, everybody out there, give a warm B and O welcome to Ms. Wei Chen.

Wei Chen [00:02:31]:
Thank you. Delighted to be here.

Kyle Lawrence [00:02:33]:
Calling in from Brussels, I respect the hustle. A lot of people, if they say I’m traveling, I can’t make It. Let’s postpone to next week. But not you. You’re an inspiration to all future B and O guests.

Wei Chen [00:02:44]:
Yeah, all of those frantic last-minute, not-having-time-to-cancel, or didn't-realize-this-is-actually-a-video moments. Let's put all that behind us. Yeah, we don't need to talk about that.

Kyle Lawrence [00:02:59]:
That's great. Well, why don't you kick us off by talking a little bit about your journey, the career arc that led you to Infoblox. What brought you there? How long have you been there? I do see on your CV, just as kind of a non sequitur, that you were with Sun Microsystems, which, fun fact about me, I've represented a lot of solar energy companies and renewable energy companies, and I used to use their EDGAR filings to get risk factors. So that's a life-is-full-circle moment for me. Thank you for that. But enough about me. How did you arrive at Infoblox? Just tell us how you got there and what sparked your interest in AI.

Wei Chen [00:03:42]:
Yeah, thanks so much. For those of you who have never heard of Sun Microsystems, that's okay. Well, you just aged. You dated yourself, Kyle.

Kyle Lawrence [00:03:52]:
I do that to myself a lot. There's a lot of gray, you know. I don't know if it comes across in the video.

Wei Chen [00:03:57]:
I would say, you know, I grew up in China, a first generation immigrant; I didn't come to this country until I was 23. And I would say for the first part of my life, it was all about following the herd. You know, like, hey, going to the best school; if other people want to go to the US, why can't I do it? If other people go to the highest paying law firm, maybe I should try too, right? Until at one point, and it has to do with AI, I realized actually maybe there's something that I'm passionate about, and maybe I'm good at it. And it's all out of self interest, right? Everything that everybody does with dedication, there is a little bit of self interest in it. My self interest in going into AI was really that I was an M&A attorney by training. You know, I'm an attorney that. Oh, hey, Kyle, we have a lot in common.

Kyle Lawrence [00:05:01]:
Except I knew we were friends. I knew it.

Wei Chen [00:05:06]:
So thousands of documents, right? Hundreds of thousands of pages that I have reviewed in my life as an M&A associate, even today. You know, I'm a very hands on person. I don't want to lose my trade. So when I was a junior associate, or a senior associate, I didn't want to do it. I thought it was a waste of time. And then when I became a client, I didn't want to pay for it, right? And back then, this was like seven years ago, people were pitching me that there were all these AI tools where potentially, with a click of a button, you could summarize all of these things very accurately. And it turned out that was not the case. Seven years later, we're still on this journey, and one of the realizations is actually that AI is an evolving thing, and it learns just like humans learn.

Wei Chen [00:06:02]:
And lo and behold, I went down this path of creating these very large scale, expert annotated data sets. When you think about data sets, think about these gigantic spreadsheets, right? The rows are things like termination for convenience, date of the agreement, change of control. And the columns are hundreds and hundreds of documents. So we created all of those, had an amazing group of volunteers, including law students. And I had the opportunity to work with leading AI researchers, because that's the way that they can get their papers published. As a result, I have the honor of being listed as a co author on academic papers that got accepted at the best of the best machine learning conferences, like NeurIPS, like ACL. And that's how I learned.
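The dataset structure Wei describes, rows of contract clause types against columns of documents, could be sketched like this. The clause names come from the conversation; the document IDs and annotation values are invented for illustration:

```python
# Toy sketch of an expert-annotated contract dataset: each row is a
# clause type, each column is a document, and each cell holds the
# expert's annotation for that clause in that document. Document IDs
# and annotation values below are made up.
annotations = {
    "termination_for_convenience": {"doc_001": "Yes, Section 8.2", "doc_002": "Not present"},
    "date_of_agreement":           {"doc_001": "2021-03-15",       "doc_002": "2020-11-02"},
    "change_of_control":           {"doc_001": "Consent required", "doc_002": "Not addressed"},
}

def lookup(clause: str, doc: str) -> str:
    """Return the expert annotation for one clause in one document."""
    return annotations[clause][doc]

print(lookup("change_of_control", "doc_001"))  # Consent required
```

At training time, each column (document) with its filled-in cells becomes one labeled example for the model.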

Wei Chen [00:06:59]:
And then of course, living with an AI researcher at home also helps. You know, we talk about this every single day, 24 hours a day. So, seven years later, not only have we gone beyond just building training data sets, but we went through the ChatGPT transformation. And now, I would say sometime in January and February, we're at another transformation, which is the agentic transformation.

Kyle Lawrence [00:07:36]:
That's great stuff, and what a fascinating journey. Thank you for sharing. It's always nice to have somebody in the house with you who specializes in it to help, you know, grease that process along.

Kyle Lawrence [00:08:19]:
So in your role as Chief Legal Officer for Infoblox, what is your day to day like? What does Infoblox do? Tell us a bit about that.

Wei Chen [00:08:29]:
Yeah, thanks so much. So I'm the Chief Legal Officer of a company headquartered in Santa Clara, California, called Infoblox. Infoblox is the leader in network management and security. We focus on this particular thing called DNS, the Domain Name System. People probably don't know what DNS is, but you use it every single day. A name like infoblox.com gets translated into an IP address, which is just a series of numbers, and that's how your computer gets connected to the Internet. Everybody is using DNS every single day without knowing it.

Wei Chen [00:09:15]:
My journey at Infoblox has been an amazing transformation for me as well. When I first came in, it was my first GC job. I was an M&A attorney by training, and I started doing all of the other stuff: employment, litigation, IP, all of those things. At that point I was like, oh my God, I've really stretched myself, right? And then, lo and behold, the CEO who hired me retired. So nine months in, I got a new boss, which almost amounts to a new job, right? I had to prove myself all over again and do all of those things. And I was like, hey, Scott, what do you think? Where could I add more value? And he looked at me and was like, oh yeah, we were doing this Australia thing, and if we were able to change some of the standards and regulations, that would be really cool, right? And I was like, standards? That's hard.

Wei Chen [00:10:20]:
And then he looked at me and was like, that's not hard. And just that one response put me on this journey of becoming a fake-it-till-I-make-it government affairs slash public policy person.

Kyle Lawrence [00:10:35]:
Nice.

Wei Chen [00:10:36]:
Yeah. So for the past, I would say, two years, it's been one of these amazing opportunities that I never thought I would have. We were able to collaborate with leading researchers and regulators, both in the US and in the EU. We helped co author the secure DNS deployment guide with NIST, and we also helped draft the secure DNS guide for NIS 2, which is the network and information security directive in the EU, one of the major regulations. And we have become a trusted advisor to ENISA, which is the regulator in the EU. I'm now in Brussels, actually, on my public policy trip to talk about this new initiative I feel very excited about. It's called DNS Aid, and it's one of the ways to keep the agentic web open.

Wei Chen [00:11:44]:
So back in the 1990s, when the Internet was first formed, there was a time. Kyle, you're gonna date yourself again. I know, I'm ready.

Kyle Lawrence [00:11:54]:
Let’s do it.

Wei Chen [00:11:57]:
Back in the 1990s, when you had a computer and you wanted to get connected to the web, you had to call someone at Stanford and say, can you put my computer on the list? And that's how you got connected. Over time, people were like, this won't scale, let's create a directory. And then these smart people suddenly realized, actually, why don't we do a decentralized, distributed system called DNS. So you have these top level domains, .com, .edu, .gov, and the DNS will first do the lookup of .com and get to a whole bunch of servers, and then infoblox.com and you get to the Infoblox servers, and then support.infoblox.com and you get to the support servers. So what we're trying to do now is add an agent, right? So agent.infoblox.com, and then the DNS, which is in everybody's infrastructure, will know this is an agentic transaction, agent traffic instead of human traffic. And it's just super important, because nobody owns DNS. Somebody owns the directory, but nobody owns the DNS. And if you don't like one of the directories, you don't like the way they're treating you, you don't like the fees they're charging you, you take your name and you go to another place.
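The hierarchical, nobody-owns-it lookup Wei walks through can be illustrated with a toy resolver. The zone data and addresses below are invented (192.0.2.x is the documentation address range), not real DNS records:

```python
# Toy model of hierarchical DNS resolution: start at the root, descend
# through the top-level domain (.com), then the organization's zone
# (infoblox), then the final label (support). Real DNS delegates each
# step to different servers; here one nested dict stands in for all of
# them. Addresses use the 192.0.2.0/24 documentation range.
ROOT = {
    "com": {
        "infoblox": {
            "@": "192.0.2.10",        # infoblox.com itself
            "support": "192.0.2.20",  # support.infoblox.com
        },
    },
}

def resolve(name: str) -> str:
    """Walk a dotted name right-to-left through the zone hierarchy."""
    labels = name.split(".")[::-1]    # "support.infoblox.com" -> ["com", "infoblox", "support"]
    zone = ROOT
    for label in labels[:-1]:
        zone = zone[label]
    record = zone[labels[-1]]
    return record["@"] if isinstance(record, dict) else record

print(resolve("support.infoblox.com"))  # 192.0.2.20
```

Because each zone delegates to the one below it, any organization can manage its own names without a central owner, which is the openness property Wei is arguing to preserve for agents.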

Wei Chen [00:13:26]:
Unlike in social media or in these app store situations, you can't, because you've built a whole bunch of presence there. And that's very difficult, right? So that's why I'm here. And I think the world is very interested in making sure that this new agentic web is open and interoperable.

Moish Peltz [00:13:49]:
That's so fascinating that you have this DNS approach here. My background is intellectual property, so I got interested in Internet domains kind of from that angle. And that intellectual property interest is what got me eventually into crypto. But then I learned about domain names and these decentralized systems, the history of how the DNS system came about, and the way ICANN came about as this kind of global governance organization, for better or worse. I think there's kind of a through line there, and I think you just completed the sentence: from the early Internet to blockchain and other decentralized systems, and now to this agentic framework, where you have these questions. How is it going to be governed? How are agents going to communicate with each other? What's the standard setting for that? How is it going to work across different jurisdictions? So that's not really a question, but I think it's a really interesting through line of how these systems work.

Moish Peltz [00:14:50]:
And now, kind of looking forward into an agentic future, there's uncertainty about the legal framework and regulatory structure, like, what's going to happen next? I'm thinking about it kind of philosophically. I'm just curious, from your point of view, what types of issues are you seeing? You mentioned NIST and the EU. How are they approaching these novel, agentic DNS type issues, if at all, and what sort of governing standards are being set forth?

Wei Chen [00:15:26]:
I think the principles are very commonly accepted. But first of all, I want to acknowledge what an exciting time we're in now. Can you imagine? It's almost like the pre Internet time, right? I don't know, the other day I saw someone and he was like, oh yeah, DNS. I was like, how did you know about DNS? He was like, when I was in law school, we had this debate about Internet governance, right? And we are at that time now. I think the law students probably are having lively debates about AI and how AI should be governed and everything. But the overall principles should apply, right? Interoperability; you don't want vendor lock-in where there's only two app stores in the world. Oops, we are in that world. Can you imagine if we only had the two app stores and no Internet, what kind of world would that be?

Moish Peltz [00:16:22]:
Right?

Wei Chen [00:16:23]:
If we don't do this right from a foundational perspective, then that's where we're going to end up. So I've been talking with people from these big AI players. They want interoperability, they want to keep the Internet open. They want all the right things, but it's just the market gravity of driving economic benefits.

Moish Peltz [00:16:52]:
No, everyone wants to be a platform with platform lock in, and they’re an aggregator and et cetera. Right. That’s the business strategy. So having this open source Claude can

Wei Chen [00:17:02]:
export any of those people?

Moish Peltz [00:17:05]:
No, no, not of that. Yeah, yeah, yeah. Well, that’s.

Kyle Lawrence [00:17:07]:
That's an interesting way to look at it, because we often talk on this show about what it was like in the long, long ago, in the nascent Internet days and the early blockchain days. And you see everybody wants to get their pound of flesh or get a piece of it somehow, but they're not sure how. From a regulatory standpoint, I think that presents a lot of very interesting possibilities. But there's also a natural, I don't know what the right word is, roadblock or obstacle that you have to overcome, one that I think hits AI in ways that it doesn't necessarily hit the Internet and other industries. And that's the speed with which it develops.

Wei Chen [00:17:46]:
Oh, my God.

Kyle Lawrence [00:17:48]:
Now, you're at the regulatory forefront. You're in Brussels, you're meeting people all over the world. What are you seeing people get right and wrong as they start to try to regulate these things? Specifically, keeping in mind that, okay, great, we passed the law today; by the time we get there, AI has blown through whatever the thing is they were concerned about regulating in the first place, and it is now a whole different thing that they have to look at. So what do people get right and wrong on this front?

Wei Chen [00:18:19]:
I would say that what people get right is everybody agrees that AI needs to be regulated somehow, or controlled. There needs to be some kind of AI governance. I think even the big AI labs now, you know, like this Anthropic, what is it called? Methios? Methos? I don't know how to pronounce it. Mythos? It came out. Yeah, it is.

Wei Chen [00:18:48]:
It is very powerful. I don't know whether it's as dangerous as what they claim it to be, but I have seen how bad these really powerful models could become. However, I think what people are getting right is they understand there's a need to provide governance, provide trust. Right. But I think they don't understand AI behavior, especially this truly agentic behavior, well enough to identify the right places to put new guardrails. There are old guardrails around that could apply to the agentic world already, like interoperability, like auditability, like transparency; you cannot commit a crime or fraud or harassment regardless of what kind of weapon, what kind of tool you use to do those things. But the enforcement, the enforcement is less, right? It's almost like, physically, let's think about it: if people just throw litter everywhere on the street, unless it's in San Francisco, those people are probably going to be fined, or potentially charged with a misdemeanor. But when people spam you, or spam call you, the enforcement activity is a little bit less.

Wei Chen [00:20:37]:
And then when people launch these malicious attacks, I don't see legal enforcement against those criminals, or at least not publicly enforced actions. So the deterrence, I kind of feel like the deterrence from a cybersecurity perspective probably could be enhanced, because now, with these AI models so powerful, breaching everybody is just going to be so easy. A really easy example: it's very easy to open our garage doors, right? Our garage doors are not that high tech. If people could just feel free to open everybody's garage doors and steal stuff from your garage, there would never be an end to police activity. People would be suffering, and there would be this whole industry created to protect, to harden, your garage doors. So I think having a little bit of proactive legal enforcement is probably something that the regulators should start thinking about. Yeah, the other thing.

Wei Chen [00:22:09]:
Yeah, the other thing. For example, when I was talking with regulators about agent discovery, it's kind of foreign to them. They don't understand why an agent needs to discover another agent. And I have to say, because I was seeing a lot of blank faces, like, why do you need to do that? How does it work, and that kind of thing. So I had to do a lot of soul searching as to, hey, what are we talking about? Are we talking about something so far down the future that I can't even draw a line from today to that place? So I just recently came up with this three-phase model of agentic transformation that we're going to have to go through, and I was inspired by one of the blogs and the protocols that Google and Anthropic came out with. In the first phase, there is a protocol called A2UI, right? We've all heard about MCP and A2A, but there's this new thing called A2UI, and it's essentially a protocol for an agent to talk with a human centric user interface: coming into your website, logging in with your human credentials, click here, drop down there, all of those painful things, right? It's almost like asking an elephant to swim.

Wei Chen [00:23:42]:
It's very painful to watch when you see agents struggle on your website, trying to look around and find the right place to enter. If it's truly agentic, they're just going to speak a whole bunch of gibberish that we will not even be able to understand, and it's going to happen in milliseconds instead of those minutes and hours that you have to sit there and watch. So that's the agent-to-UI phase that's currently taking place. And then the second phase is going to be agent-to-MCP. MCP is essentially a protocol that allows an agent to access tools and data sets a lot more easily than maneuvering a human centric user interface. And so everybody is still in this agent-to-UI place.

Wei Chen [00:24:39]:
And that's why they feel like, okay, I don't need to find other agents. Of course you don't, right? You're still trying to manage the human website. But you cannot be thinking about things that way, because whatever you're building today in this first phase of A2UI will have significant impact on the future of A2A when it comes along. And it's going to come along very quickly once people start getting this identity trust layer figured out, right? So one of the analogies I would use is, you know, when you create a whole bunch of documents and just name them randomly. That's what's happening with agents today. An agent would be called something like IA6728x8.amazonaws.com/...json, right? And you're like, what is this? Why is this here? And then for us, we're like, hey, can we just make that into a readable, understandable thing that's tied to your domain? So it would say support.agent.infoblox.com. Why do I have to do that today? You're going to be like, Kyle, wait, I don't need to do that today because I only have like 15 agents.

Wei Chen [00:26:08]:
I remember every single one of them, right? But think about when you get to the A2A world, where it's not you who needs to remember them; it's the outside agents who need to go find them. So you'd better figure that out before the A2A phase comes along. At that point, you're going to be dealing with trying to go back and clean up, I don't know, tens of thousands of names. So that's, in a nutshell, what this DNS Aid is all about. And it's all about making analogies that are approachable. Right.

Wei Chen [00:26:49]:
Understandable by people.
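The naming idea Wei describes, readable agent names under your own domain instead of opaque machine-generated identifiers, could be sketched as follows. The helper function and the registry are hypothetical illustrations, not part of any published specification:

```python
# Hypothetical helper: give every agent a readable DNS name under the
# organization's own domain (support -> support.agent.infoblox.com),
# so it stays portable and discoverable instead of being an opaque ID.
def agent_name(agent: str, domain: str, label: str = "agent") -> str:
    """Build a readable, domain-tied DNS name for an agent."""
    return f"{agent}.{label}.{domain}"

# A tiny registry mapping readable names back to the agents they serve.
registry = {agent_name(a, "infoblox.com"): a for a in ("support", "billing", "legal")}

print(agent_name("support", "infoblox.com"))  # support.agent.infoblox.com
```

Because the names live under the organization's domain, they inherit DNS's portability: if you move providers, the names move with you, which is the lock-in escape hatch Wei contrasts with app stores.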

Moish Peltz [00:26:51]:
Kyle, it's like the associate that goes into the M&A folder and starts naming things, but not with the naming convention that you're expecting.

Kyle Lawrence [00:26:59]:
Drives me crazy.

Moish Peltz [00:27:00]:
I know. Well, it's so interesting, because, yeah, as you mentioned, the MCP services and the government regulatory side are kind of where we started. But my way of thinking about the AI governance portion of this is that, yes, there is the regulatory part, but there's also, I think, the self regulatory part: the internal governance within organizations and among organizations. And the way I've been communicating about this with clients and friends and people working in the space, even for our own purposes, is that we need really good AI governance within the organization, so that we have a strong standpoint and feel confident in the way things are structured, and in data privacy and security. If you don't have that core internal governance and those privacy policies, and build the structure around that, then when you want to go really fast and have all these tools communicating at light speed, it's like, well, you're kind of out over your skis. So I'm wondering what you're seeing as you communicate with other attorneys, regulators, in house counsel. What's your sense? We're trying to do all these things really quickly; it's moving really fast.

Moish Peltz [00:28:13]:
No one really understands everything that's going on. But are people taking the time to step back and examine the way their AI governance is set up, the way their policies are set up, and the way they're training both lawyers and non lawyers to evaluate and manage these different risks?

Wei Chen [00:28:32]:
Yeah. I have to say, people really want to do the right thing, but without understanding where the future is, or understanding how to use technology to make things much easier, all that effort you put in place is probably just going to be a whole bunch of paperwork. Right. So when lawyers today, when governance professionals, think about AI governance and privacy rules and all of those things, you have to think about code, you have to think about SDKs, you have to think about MCP, you have to think about toolkits, you have to think about policy enforcement that travels with your agent card and gets enforced at the DNS layer. That's how you apply all of these policies in reality; otherwise it's just a piece of paper sitting somewhere, and only the lawyers who drafted it have probably read it once, and nobody else even knows it exists.
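One way to picture "policy enforcement that travels with your agent card": the card carries machine-readable constraints that an enforcement point can check before routing traffic. All field names below are hypothetical, not taken from any published agent-card schema:

```python
# Hypothetical agent card: identity plus machine-readable policy, so an
# enforcement point (e.g. at the DNS layer) can check the policy rather
# than rely on a document nobody reads. Field names are invented.
agent_card = {
    "name": "support.agent.infoblox.com",
    "capabilities": ["answer_tickets"],
    "policy": {"data_residency": "EU", "allowed_hours_utc": (8, 18)},
}

def policy_allows(card: dict, hour_utc: int) -> bool:
    """Check one policy constraint: is the agent allowed to act now?"""
    start, end = card["policy"]["allowed_hours_utc"]
    return start <= hour_utc < end

print(policy_allows(agent_card, 10))  # True
print(policy_allows(agent_card, 22))  # False
```

The point of the sketch is the shift Wei describes: the policy lives in data that systems evaluate on every transaction, not in a PDF that lives in a shared drive.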

Moish Peltz [00:29:42]:
And it was outdated three months later, because people weren't talking about MCPs. Yeah.

Wei Chen [00:29:46]:
Yeah. So I'm an example that this can happen, right? So, Kyle, I know that drives you nuts, when people don't conform to the naming convention. Of course, if you want them to do it manually, it's a lost cause; they won't do it. But if you give them an SDK where every time they create a document, they just click a button, and then boom. Right.

Wei Chen [00:30:14]:
You know, the name just automatically gets generated. And lawyers have this desire to redline, so as soon as you generate something, they're going to fix it.

Kyle Lawrence [00:30:26]:
Yeah, that's how we get paid. That is a fair point. I mean, I've tried different things. Listen, I'm just fastidious in how I want my documents named. I don't think that's a crime. And I won't apologize for it, certainly not to the associates. Your job is to make me happy.

Wei Chen [00:30:44]:
A little tool, right? You have to create a little tool that makes it so much easier for other people to adopt it. Yeah, yeah.

Moish Peltz [00:30:51]:
Here's the standard, and every time you try to do the thing, it automatically enforces that standard. That's a lot better than, well, I hope they do the standard this time.

Wei Chen [00:31:00]:
Yeah, exactly. So I'm actively working at the Linux Foundation. The Linux Foundation now has an agentic AI foundation as well, and under it there are a number of working groups. I go there and commingle with AI researchers, standards people, and I learn so much, right? Everybody has a different point of view. And I have to say, lawyers, by our training, have a significant advantage, because we do two things really well.

Wei Chen [00:31:34]:
One is we read these long text documents. We have this superpower of reading long text documents, especially M&A attorneys, right? That's right, yeah. Yes. And the second thing is we spit out words very easily, and we synthesize. So just have a little bit of a leap of faith that you can become technical. I remember there was a time I was like, oh, well, you know, but I'm not technical. And my executives were basically like, what? That's not an excuse anymore.

Wei Chen [00:32:11]:
Wei, go learn it, right? So by me now writing academic papers, publishing at these AI conferences, driving standards, and literally understanding what they're talking about, it's a demonstration that this can be done.

Kyle Lawrence [00:32:33]:
That's a great point. And I've seen various iterations of what you're talking about, the "I'm not technical" excuse; like, I'm not super technical, but I know which way the wind blows. You see this all the time. When firms, for example, just drawing on my own experience, if you have a way as a firm of saving your documents and you now have a new document management system, there's training involved, and you have to sit down with people, and people invariably just get frustrated; they don't know how to do this, they don't want to do this. This is another version of that. So how do you break through that stubbornness? That's just how people are. People get used to their own routine and stuck in their own ways. And I'm guilty of it.

Kyle Lawrence [00:33:16]:
I'm sure we all are in certain ways. How do you break through that? Because let's face it, for GCs, this is like the Internet. I can't imagine practicing law without the Internet. You can, technically, I guess, but why would you do that? And it's the same thing here.

Wei Chen [00:33:34]:
And I’m certainly not gonna say the CEO demands it, but that’s, you know, a good incentive. But that is external. It’s not intrinsic, right? So there are two things. One is, I’m a huge fan of this, we actually propose something called AI habits.

Wei Chen [00:33:55]:
Right? It’s not about the use cases, it’s not about the tools. It’s all about AI habits. What are AI habits? You know, I teach these workshops, and there are like 24 of them, right? But I only show the first five, and then I quickly flash to the 24 and say, here’s what I have assembled during the past three years, and over time you’re going to assemble your own, right? So these are very, very simple things, like: open an AI tab or open an AI app. When you get up, it’s the first thing you open on your computer, before email, before Slack, before Teams. Right? Really powerful. Similar to flossing teeth.

Wei Chen [00:34:38]:
If you forget one day, fine, do it the next day. Don’t make it so hard. Don’t be so hard on yourself. And then the second thing is, use voice, right? So, Kyle, you look at the ceiling and you work hard. I look at the ceiling because when I’m talking with my AI, I can’t be looking at the screen. I have to look at the ceiling as if I’m talking to somebody else, and just ramble, right? I’m just like, blah, right? Have my whole thought just get spit out to the ceiling.

Wei Chen [00:35:17]:
And that’s actually a really effective way of conversing with AI. So the second habit is: converse, don’t prompt. Don’t take a prompting class and don’t ask AI to help you prompt. That was so 2025, right? And then last week, so I have this weekly Substack, I write a blog called Fairly AI, and last week I was coaching my daughters. Ironically, one of them is a junior in computer science in college, and it turned out that she doesn’t know how to use AI, because the schools, yeah, they’re not doing our students a favor.

Moish Peltz [00:36:06]:
A lot of schools just say no, no way, which I think is... I was at a law school last week, and I’m like, what? I asked a student, what’s your policy on AI? No AI. Okay.

Wei Chen [00:36:17]:
Yeah. I don’t want to give the teachers a hard time, but I think those teachers who don’t allow their students to use AI need to work harder. They need to revamp the way they test students. Right. So one of the things I do, when we interview candidates, for the past 10 years we’ve given the candidate a technical assessment. And we have never said you can’t talk to other people, you can’t do research, you can’t do this or that. Because that’s what real life is. Real jobs are open book tests, not closed book tests.

Wei Chen [00:36:58]:
Right, right. And so for the past two years we’ve been like, use AI, please. When you do this technical assessment, please use AI. And it turned out to be the single most effective indicator of a candidate’s growth mindset and technical ability. So much more indicative, in terms of the accuracy of the signal, than a resume or just a 30-minute conversation. And that’s where AI can really help. Why? Because the other day someone came to me and said, hey, should we use AI to screen resumes? And I was like, unfortunately, that’s a question that doesn’t have a yes or no answer, because the resume itself is a very blunt thing.

Wei Chen [00:37:56]:
Right. When humans review resumes, there’s so much bias going into it. Are you coming from an Ivy League school? Are you from Big Law? Have you done this? Have you done that? I have seen people with the perfect resume, I have seen it so many times, and then when they take the technical test, it’s like, I don’t know what you were doing during those days.

Moish Peltz [00:38:25]:
Who are you?

Moish Peltz [00:38:25]:
Yeah, this isn’t the person on the resume.

Wei Chen [00:38:28]:
Right. So it’s an imperfect thing that humans had to do, because they didn’t have the time or the bandwidth to say, everybody take a test and I’m going to review all of them. Right. But now we can. So why don’t we rethink what artifacts we want the candidate to create? That’s going to result in less bias, because now we have the tools available to us. Right? So my answer is: don’t have AI screen resumes. Have AI screen a variety of different things: their online profile, their GitHub contributions, their writing assessment, a technical test, and have AI assess them all.

Wei Chen [00:39:20]:
And hopefully it’s not going to be too much of a burden on the candidates either, because the candidates can just use AI.

Moish Peltz [00:39:27]:
That’s great. And I think that overlaps with... I was lucky to be one of the students in your Stanford Law School AI Strategy for Legal Leaders course, which I thought was excellent and would recommend any one of our listeners go and take. I know there are other courses you’ve given, but I just happen to have taken that one. And you mentioned the daily AI use, like before you do anything else, open up the tab. That’s something I do every day. One of the things you said in the course that really resonated with me, and that I’ve now conveyed to the rest of our law firm, is: you may have used AI and gotten really frustrated, or seen it not be able to do something you thought it should be able to do, and then you kind of gave up, or stopped, or were less enthusiastic. But it’s like, no, you have to keep using the tools, and they change every three months, and you have to continue.

Moish Peltz [00:40:15]:
So I thought that was very useful as well. And then there were a bunch of other tips and tricks that I thought were really helpful, like thinking through how to discuss the tools with an organization. It’s easy enough for you or me or Kyle to use a tool, but to get the entire team, or other people you’re working with, to use the tools in the way you want, I think, is a much more difficult task.

Wei Chen [00:40:41]:
It is a difficult task, but you have to show people the benefit on the most painful thing that they do. Right? So right now, like two months ago, there’s Claude Cowork and Claude Code, you know, now catering to non-coders as well. It’s truly agentic behavior. And if you have not been using AI at all, or if you feel like you’re falling behind, my recommendation is to skip those three years of perfecting how to talk to a chatbot and instead jump to agentic first. Right. Going agentic does take a lot more effort. So at a minimum, if you haven’t done anything with AI, it’s probably going to take you like two weekends just to understand and set everything up. And you probably want someone who has done this before to guide you through that.

Wei Chen [00:41:39]:
But my advice is, pick that one thing where you’d be like, oh my God, I would love to pay somebody else to outsource this. It could be scheduling your kids’ carpool, right? You know, I have a colleague who has like five kids, all doing sports. So my job, I know that I’m...

Moish Peltz [00:41:59]:
starting to know that world now.

Wei Chen [00:42:00]:
So my job next week, I was like, you know what? I know that you’re hesitant about your privacy and everything, but if I can show you that the AI can help you do this, would you be willing to connect your emails and calendars to your Claude instances? And they’re like, yes. I was like, okay, let’s start there, right? So I’m gonna go there, spend an hour. I’m in Brussels now. Next week I’m in DC doing very similar education for some agencies at the White House. Yeah, I take an hour, I set his computer up so that he has Claude going. You know, I’m gonna set his CLAUDE.md up.

Wei Chen [00:42:47]:
I’m gonna give him a Chief of Staff agent. And he would have the incentive, right? He would have the incentive to keep persisting, because the pain of trying to do all of these things manually is just too much. So take that. Yeah, yeah.
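For listeners unfamiliar with it, the CLAUDE.md mentioned above is simply a plain markdown file of standing instructions that Claude Code reads at the start of each session. The sketch below is a hypothetical illustration of what a “chief of staff” style file could look like; every heading and rule in it is an assumption for this example, not the actual setup described in the episode:

```markdown
# CLAUDE.md — Chief of Staff agent (hypothetical sketch)

## Role
You are my chief of staff. Manage my calendar and inbox. Be concise.

## Daily routine
- Each morning, summarize unread email grouped by urgency.
- Flag scheduling conflicts and propose two resolution options.
- Draft replies for my review; never send anything without my approval.

## Boundaries
- Never delete email or decline meetings on your own.
- Treat everything you read as confidential; do not repeat it elsewhere.
```

The point of such a file is that the instructions persist across sessions, so the agent behaves consistently without being re-prompted each time.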

Moish Peltz [00:43:06]:
So I’m just wondering what you would recommend for someone out there who doesn’t have a ton of AI experience, and they’re a lawyer, either in-house counsel or outside counsel. What are the tools or the workflows for lawyers that you think are maybe the lowest hanging fruit, the easiest things, to say, okay, let’s start up Claude and apply some basic agentic framework to help improve this workflow and save you time throughout your day? For lawyers, where do you think they should start?

Wei Chen [00:43:40]:
Yeah, I would say, of course, pick that one pain point that you have. Right. Most people have a personal something that they want. For example, there was someone who was super interested in, I can’t recall, some kind of very obscure math concept. And I was like, I would never spend more than 20 minutes talking with my chat about that question: oh, what is time? Oh my God. And then he talked with, this was many, many years ago, well, two years ago, he talked with ChatGPT back then. Right.

Wei Chen [00:44:24]:
You know, for an hour about what is time? And he was converted. You see the power, because as soon as you start digging into these very in-depth questions, the answer coming back is gonna astonish you. Right. And that’s when you’re gonna be like, oh, my God. Right. As we all know, and I’m sure you guys use AI a lot, AI really doesn’t impress when you ask it to do mundane things; it doesn’t do a good job. Yeah.

Wei Chen [00:45:00]:
But if you ask it to do the things you never thought it would be good at, it really surprises you.

Kyle Lawrence [00:45:08]:
At least the ones I’m using. I’m not a numbers guy, so I always ask it to help me with calculating cap tables and things. And they’re okay at that. It’s still not as good, but it’s better than I could do it. So, I mean, that’s a win. You know, 80% is better than 40%. I don’t know.

Wei Chen [00:45:23]:
Yeah, so I totally agree. A year ago, when I tried to do a waterfall, like a very simple waterfall, it was like 40, but now I would say it’s 80. 80 is a lot. Right. You think about it: I’d have to wait for the bankers to spend two weeks putting together a spreadsheet. And it’s often like 80, and the other 20 is always really critical. Right. You know, oh, you applied escrow to option holders. You shouldn’t be.

Wei Chen [00:45:52]:
Right. You know, that was the kind of really, really critical thing.
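For anyone curious what the “simple waterfall” being discussed looks like in code, here is a toy Python sketch. The single non-participating preference, the pro rata split, the treatment of net-exercised options, and all of the numbers are illustrative assumptions for this example, not how any real deal model (or any AI tool) computes it:

```python
def waterfall(proceeds, preference, common_shares, option_shares,
              strike, escrow_rate):
    """Toy merger waterfall: one non-participating preference comes off
    the top, then the residual is shared pro rata between common shares
    and net-exercised options.

    The detail flagged in the conversation: the escrow holdback applies
    to common stockholders only; option holders are cashed out in full
    at closing. (Simplified: ignores exercise proceeds, multiple
    preferred series, participation caps, etc.)
    """
    residual = proceeds - preference
    per_share = residual / (common_shares + option_shares)
    common_gross = per_share * common_shares
    option_gross = max(per_share - strike, 0.0) * option_shares
    return {
        "preference": preference,
        "common_at_close": common_gross * (1 - escrow_rate),
        "common_escrow": common_gross * escrow_rate,
        "options_at_close": option_gross,  # note: no escrow applied here
    }

# Hypothetical $100M sale: $20M preference, 60M common shares,
# 20M options at a $0.50 strike, 10% escrow holdback.
result = waterfall(100e6, 20e6, 60e6, 20e6, 0.50, 0.10)
```

In this made-up scenario the residual is $80M over 80M shares, so common holders get $54M at close with $6M in escrow, while option holders receive their full $10M at close. Applying the escrow rate to `option_gross` as well is exactly the kind of 20% error a quick AI draft can slip in.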

Kyle Lawrence [00:45:57]:
Yeah, it’s true. If we can shift gears a tiny bit, there is a topic I would like to cover before you have to go, because I know you’re on Europe time. Some landmark cases came out in recent weeks, and we’ve talked a lot about general counsels using AI. The concept of privilege is something that’s top of mind for not just lawyers but all our clients.

Kyle Lawrence [00:46:19]:
You know, I get constant emails, because we’ll tell them about the Hepner case and the Gilbarco case, which we’ll talk about in a second, and we’ll say, caution, don’t use these things. And they’ll be like, well, what if I use this platform? And it’ll be one that I’ve never even heard of, and this is what we do for a living, which is really scary. So how do you encourage people, or how do you educate people, in the wake of the Hepner case and the implications of using consumer AI tools without the instruction of your lawyer, and Gilbarco, like, what...

Kyle Lawrence [00:46:51]:
How do you begin to broach that with people?

Wei Chen [00:46:55]:
Actually, this is how I interpret the cases, but please correct me if I’m wrong. Like, I hardly even qualify as giving legal advice these days. You know, most of the cases focus on whether there’s an expectation of confidentiality, right? So first of all, there has to be a confidentiality clause in the agreement for the tool that you’re using, and for all of these free tools, there’s no confidentiality clause.

Wei Chen [00:47:30]:
And then even among the paid tools, actually, there’s one LLM provider, Gemini, that did not have a confidentiality clause in the individual Pro license; you have to negotiate to get it, right? So that’s crazy. Yeah, because they feel like, hey, we agree not to train on your data. Right. They think that’s enough.

Moish Peltz [00:47:56]:
Right.

Wei Chen [00:47:56]:
And they think that’s giving you more, but confidentiality is different. And I don’t know, I’m only speculating, right? You know, so you guys can educate me. How do you reconcile the confidentiality clause in the agreement with all these privacy policies that allow sharing with a whole bunch of third parties? Right. In your privacy policy, any third-party processor is allowed to...

Wei Chen [00:48:27]:
And then some of the language in the privacy statements of these AI tools is so broad, they don’t even need to tell you. So what is the value of the confidentiality clause nowadays, with those kinds of exceptions? But that said, those third-party sub-processors or processors each still have a confidentiality clause in place, right? And confidentiality clauses always allow a third party to know on an as-needed basis. So this privilege concept is very much dependent upon whether, when you ask the chatbot a question, you are expecting confidentiality, and whether this is in consultation with a lawyer. But as we all know, everybody has the right to claim privilege, even for themselves. Kyle, I’m just making things up. Is that right? I guess people could do their own, right? Don’t cut this out.

Wei Chen [00:49:44]:
Okay? Yeah, I haven’t thought about whether they could just represent themselves pro se, and then for all the things they consult the LLM on, if they have privilege on their own, they should be able to have privilege on those.

Moish Peltz [00:50:03]:
Well, I think that’s actually one of the readings, if you compare and contrast Hepner to the Gilbarco case. Yes, Hepner was criminal and Gilbarco was civil. So the civil procedure rules...

Kyle Lawrence [00:50:18]:
Yeah.

Moish Peltz [00:50:18]:
Expressly contemplate that there may or may not be different levels of confidentiality, but the need for discovery is weighed according to the standard relevancy. Right. So there’s a kind of contrast, versus Hepner, where it’s very explicit in the consumer tool that if the government requests it, we’ll give it to them. Like, that’s what it says. Right.

Moish Peltz [00:50:40]:
So it’s like, well, the government’s requesting it, so it’s giving it to us. And there wasn’t that direct relationship between Hepner and his counsel. Right. Whereas Gilbarco was pro se, to your point, which I think completely changes things. It’s like, well, I’m using it for a legal work product thing in my own mind, as a pro se defendant, I think. So there has to be some kind of work product around my own usage, for my own legal edification. So I think it’s very easy to read those cases differently just based on that. But there are many other reasons, including the fact that Judge Rakoff was the judge in Hepner.

Moish Peltz [00:51:23]:
So I don’t know. I’m happy to dive deeper into those things, but I think it gets to the point of: you can only control what you can control. Yeah. And if the one thing you control is making sure that you have enterprise tools and confidentiality provisions, and you’re clearly using it within a closed universe, only among you and your counsel, well, then that’s very different from what happened in Hepner.

Wei Chen [00:51:44]:
Right, right, exactly.

Kyle Lawrence [00:51:47]:
And privilege is a funny thing, because if you and I had an attorney-client relationship right now on this video call, and there’s somebody standing outside your door who can hear the conversation, I mean, that’s a problem, you know. So you have to be mindful of something like that.

Wei Chen [00:52:03]:
So I would say, yeah, people come to me and ask these questions all the time, and of course we can geek out on how to interpret these cases and speculate about what the dicta means and the nuances of that. But I think it comes down to very simple rules. One is, if you’re asking for legal advice from an AI tool, ideally you have an enterprise license in place, or there are these team licenses, right? The team license, and some of them are called business licenses, they’re for smaller teams. Right. You’re not committed for a year, and you can cancel on a monthly basis.

Wei Chen [00:52:49]:
Those have the same legal protection. They almost always use the same legal terms. Or if you’re doing API calls, right, they’re also under the same legal terms as the enterprise agreements. And then what I do nowadays, if I have something sensitive, before I ask the question,

Wei Chen [00:53:10]:
I would change the title of that conversation to attorney-client privilege. I don’t know whether it helps or not, but, you know, I do that.

Moish Peltz [00:53:19]:
Well, it’s the same way you would do it inside an organization. There would be general communication channels, and then a separate channel for attorney-client privilege conversations. So it’s taking that concept and just shifting it into the AI era, I would assume.

Wei Chen [00:53:32]:
Yeah. And actually, when I started doing that, I realized I had over-labeled attorney-client privilege, even on our team channel. Then I went back and saw that our team channel, just a standard group team chat, had attorney-client privilege on it, and I went in and took that out. Because, as we all know, if everything is privileged, nothing is privileged. So we just need to be a little more disciplined: hey, when you’re talking about one case, keep it in one conversation.

Wei Chen [00:54:04]:
When you talk about another case, keep it in another conversation. And then mark the whole thing attorney-client privilege. I mean, it can’t be perfect, right? I’m sure that if things get litigated, there will always be arguments back and forth. But I think those two things, one, enterprise tools, and two, keeping the conversation narrow and labeled, will go a long way.

Kyle Lawrence [00:54:32]:
Yeah, that’s a great way to put it. If everything is privileged, nothing is privileged. That’s a really succinct capstone for everything we’re talking about, not just today, but as a general part of our practice. As these tools continuously get rolled out, more and more people are using them. More of our clients are using them, more of our counterparties are using them. It’s the way of the world.

Kyle Lawrence [00:54:50]:
And you’re exactly right. There is more litigation to come. Gilbarco and Hepner are certainly not the last. So it’ll be really interesting to see. As we are running low on time, we greatly appreciate, again, you joining us from across the pond and late at night over there. Do you have any final thoughts or things you want to share with the audience before we say goodbye?

Wei Chen [00:55:11]:
Yeah. I would just say that AI is moving at such a dizzying pace, even I can’t keep up. Right. You know, I’m literally doing this almost on a full-time basis now. But that’s okay. You don’t have to keep up with everything and anything that’s going on at the same time. What you could potentially do is take these shortcuts. Right? You don’t have to spend three years perfecting prompting.

Wei Chen [00:55:40]:
Now you can just go agentic. Don’t despair. Right? Even if you can’t set up your Claude on day one, that’s okay, right? Come back in two weeks and try to get somebody else to guide you through it for an afternoon. Just always come back. Keep in mind, it’s as easy and as hard as flossing teeth, and flossing teeth just once a month is better than nothing, right?

Kyle Lawrence [00:56:07]:
We don’t give legal advice here, but we do give dental advice. You should floss your teeth.

Wei Chen [00:56:11]:
That’s right.

Kyle Lawrence [00:56:14]:
Wei Chen, Chief Legal Officer of Infoblox. We really appreciate you coming by. It’s been a fascinating discussion and we’d love to have you back on, you know, when the next big lawsuit comes out or some regulation comes out. We would love to hear your thoughts. We thank you so much for your time.

Wei Chen [00:56:28]:
Great. Thank you.

Moish Peltz [00:56:30]:
Thank you so much.