Working Smarter

Episode 2: Abdi Aidid on being your most creative, dynamic self

May 15, 2024
"We're at a place right now where the tech makes better performance so possible that we really don't have the same excuse that we previously had to do things in this kind of analog way. I think one of the things that we're learning now is that there's a possibility of an intelligent division of labor between human and machine."

For our second episode of Working Smarter we’re talking to Abdi Aidid—a University of Toronto law professor—about AI, the law, and the “possibility of an intelligent division of labor between human and machine.”

Aidid is interested in how AI can help legal professionals be better at their jobs and improve the delivery of legal services. He’s also the former vice president of legal research for Blue J—a legal tech company that uses machine learning to help lawyers review, analyze, and synthesize information faster and more efficiently than they could on their own—and the co-author of The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better.

Hear Aidid talk about using AI to spend more time on the things humans do best, how lawyers are already using AI-powered tools, and why he thinks AI could actually make legal services more accessible and better able to meet people’s needs. 

Show notes:

~ ~ ~
 

Working Smarter is a new podcast from Dropbox about how AI is changing the way we work and get stuff done.

You can listen to more episodes of Working Smarter on Apple Podcasts, Spotify, YouTube Music, Amazon Music, or wherever you get your podcasts. To read more stories and past interviews, visit workingsmarter.ai.

This show would not be possible without the talented team at Cosmic Standard, namely: our producers Samiah Adams and Aja Simpson, technical director Jacob Winik, and executive producer Eliza Smith. Special thanks to Benjy Baptiste for production assistance, our marketing and PR consultant Meggan Ellingboe, and our illustrators, Fanny Luor and Justin Tran. Our theme song was created by Doug Stuart. Working Smarter is hosted by Matthew Braga.

Thanks for listening!

Full episode transcript


For better or for worse, my concept of the legal profession has largely been shaped by film and TV. There are the walks and talks, of course. So many people that want the truth—but apparently can’t handle it? And don’t get me started on the banker boxes. So many banker boxes, just overflowing with documents. Who has time to read all that? 

As someone whose own chosen profession—journalist, writer—has always been very correctly and accurately portrayed in popular media, I’m sure everything I’ve just described is exactly how our legal system works.

Although…when it comes to those banker boxes, that might not be so far from the truth. Because what those legal dramas don’t typically show you is the sheer amount of work that goes on behind the scenes—the reading and the writing and researching and summarizing and sifting and on and on and on. It sounds exhausting. 

If you’re a human, that is. 

You’re listening to Working Smarter, a podcast from Dropbox about how AI is changing the way we work and how we get stuff done. I’m your host, Matthew Braga, and on today’s episode we’re going to talk about what it’s really like to be working in the legal profession today in the age of AI.

I’ll be talking to Abdi Aidid, a professor of law at the University of Toronto whose work and research is focused, in part, on applications of AI in the legal profession. And not just any applications, but the sort of helpful, time-saving tools that could actually make legal services more accessible and better able to meet people’s needs.

And if lawyers get a bit more time to work on their dramatic courtroom speeches? Sure, I guess that’s cool too.

That’s coming up next on this episode of Working Smarter.
 

~ ~ ~


Thanks for joining me, Abdi.

Thanks for having me, Matt. Really appreciate it.
 

So to start off, maybe you can tell me a little bit about who you are and what you do?

I'm a law professor at the University of Toronto Faculty of Law. I teach a variety of law courses—tort, civil procedure, privacy law. I'm of course also a lawyer, and for years before that, I was a legal technologist. I served as the vice president at a company called Blue J, a Toronto-based startup that builds machine learning and AI-backed research and analytics tools. 

So I'm someone who's thinking a lot about the future of the legal profession, thinking a lot about the ways in which AI can actually be deployed and absorbed safely and ethically to make us better at our jobs and also deliver on the promise of the law.
 

So I want to zero in on that phrase that you just said, be better at our jobs. What would you say are the biggest pain points for working lawyers today that AI is uniquely positioned to solve?

I think some of it is playing catch-up to other industries, right? So if you think about the world of insurance, for example, they've been using actuarial methods to forecast outcomes for generations. If you think about the world of medicine, there's been a quality movement. There was an empirical turn in medicine years ago, when they developed standards of practice and standards of care, where the quality of a physician encounter is evaluated—and we haven't had that same thing in the law.

But we're at a place right now where the tech makes better performance so possible that we really don't have the same excuse that we previously had to do things in this kind of analog way. I think one of the things that we're learning now is that there's a possibility of an intelligent division of labor between human and machine.

So what are computers good at? They're good at computing, you know, to put it simply. They're good at synthesizing large volumes of data, extracting insights, searching through more information than we're capable of taking in on our own, and finding the right kinds of things that we need to perform our jobs effectively.

We're good at the judgment side of things. We're good at the strategy side of things. We're good at the problem solving side of things. Is there a marriage between those two competencies that can make us better serve our clients? I think the answer today is yes. 

You have tech tools that are able to predict how future courts are likely to rule in new legal situations. You know, I worked on building a couple of those for tax and employment law. That's an example of a really effective use of computing power. Why? Because if someone knocks on my door and asks me a legal question, I might have an instinct about how to answer it. I might have all kinds of experience that tells me what the answer is. But really I'm going to have to go back and do some confirmatory research.

And the rate-limiting factor on the quality of my advice is, how many cases can I read in a day? Whereas the tech can do that rather quickly, and maybe cue up for me what the most relevant pieces of information are. So it's really about just getting smarter.
 

Maybe you can help break things down a little bit for me. I talk about the law and the legal profession as if it's one specific thing—as if it's a monolith. But of course the legal profession has lots of different types of lawyers, lots of different types of practice, lots of different types of supports. How do the implications of AI differ in different parts of the legal profession or practice?

There's a lot of people that are involved in what you might call the legal profession, or the legal system in general. You have lawyers, but they're only one component of it. You have judges, and then you have all manner of professionals whose work implicates the law. They're working with legal information. They're needing to perform some kind of legal analysis. They're needing to make sense, heads or tails, of what the law is. And all of those folks in that entire ecosystem stand to really gain from what AI can offer. So for instance, if you can get to a place where you're using AI to predict how future courts are likely to rule, you're much more likely to have a sense of your legal rights and obligations.

When people ask you a question as a lawyer—when someone comes to me and says, “Hey, I want to know about this area of law”—they're not really asking me for a description of the contours of a doctrine, right? They're asking me “What's going to happen?” Like, “What does this mean for me? Am I going to get in trouble?” And so that sort of predictive piece is what AI can really help with. 

And then there are lawyers who just have all kinds of tasks to perform that involve the kind of computing power that human beings aren't that good at. I'll give you an example. If you're someone who is representing a company that wants to acquire another company—you're a mergers and acquisitions lawyer—you've got to go through maybe millions of pages of information about that target company. You might have to go through all the contracts to make sure that you're not inheriting significant liabilities, or at least be aware of which ones you are inheriting.

That's the kind of work where it's only nominally “legal services,” because we've decided that lawyers are the ones who should do it. But that's the kind of thing where we can say, “Okay, this is drudgery we'd rather not do.” And if there is something important about this, we can use the technology to surface that material for us, so we can kind of triage, and it could do the work of the review in the first instance, right? 

But it's not just about economizing, it's also: do we really want to be tying up our creativity and our brain power in the kind of things which computers are probably better at?
 

I'm really glad that you mentioned creativity. When we get rid of that drudge work, the repetitive tasks, the things that we don't really want to spend most of our time doing, what does that free people in the legal profession up to do more of instead?

This is my favorite question to talk about, in part because it really helps me connect with why I became a lawyer to begin with. I'm hoping folks that are listening in whatever sort of professional environment they’re in can take a moment to really think about why they chose to do what they do.

I didn't become a lawyer to spend all day long in the legal research universe and try to find the exact right case for the proposition that I'm trying to argue. I didn't become a lawyer because I wanted to review endless amounts of contracts. You know, I wanted, like a lot of people listening, to have a rhetorical flourish in the courtroom—a “You can't handle the truth!” moment from the movies. Or, I wanted to work on problem solving, or delivering value to clients, or pursuing the ends of justice, right? Maybe technology can help me perform some of the legal research, but the work of strategizing as to what the right arguments are for that particular judge is something that comes to me through experience.

The encounters that people have in the context of lawyering—very often, people are coming to us in the worst situation of their lives. Or, at a minimum, the work of trying to help them navigate that problem is not work that technology can effectively do. It's really about: can it start me on a higher floor when it comes to the computational tasks, so that I can be free to be my sort of dynamic, considerate, thoughtful self?
 

Essentially, you're talking about things like intuition, empathy—like, the human side of the profession is the stuff that we could be freed up to do more of.

True. And actually, even helping us think more deeply about the law. When I say lawyers can be more creative and dynamic, I don't only mean with respect to bedside manner or our communication or handholding of clients. I actually mean that we can provide more holistic analyses of the law. The key thing is that you're freed up to do that by not having to be so attentive to the stuff that computers are better at.
 

Can this make legal services more accessible to communities who may not have had easy access to them in the past? If lawyers have more bandwidth to do a wider variety of work, or simply more work?

Yeah, the access to justice issues are... massive. I mean the legal profession doesn't agree on much, but there's a virtual consensus that we have an access to justice crisis. That often means courts being clogged up and people not getting adequate time in front of the legal process, but it actually often means an access-to-lawyers crisis. There are a lot of lawyers right now who are, for all kinds of economic reasons, competing to service what you might call the top end of the paying market, right? And there are lawyers who are engaged in a volume business where they're trying to take on as many cases as possible because they know their clients are not the kind that can afford to pay a lot of money.

And so, think about how technology can make a difference. One of the ways it can make a difference is by narrowing the gap between those two kinds of lawyers—the ones who service the top end of the paying market and the ones who represent people that don't have a lot of money to pay. That latter lawyer is much more likely to be also resource-constrained themselves, right? So think about it. If someone knocks on your door and says, “Hey, I have this problem. I was terminated from my job and I feel like there was some discrimination involved,” and you're a county lawyer and you have a storefront. Maybe you're going to basically be going at it alone, and the quality of the advice that you can offer is contingent on how much work you can do to perform research and come up with arguments, et cetera.

Now the company—let's say it's a big multinational company that just terminated your client. They're hiring a law firm that has maybe a hundred associates that can work on it, and maybe they'll have a couple whose job it is for the next week or so to get to the bottom of the issue. And they're doing it in a state-of-the-art facility on the 30th floor of a skyscraper with all of the latest technology and the software and the legal research databases. If we can build AI tools—which we are building right now in the profession—which can forecast the likely outcome of a case, synthesize all of the case law in the area in a matter of seconds, then you've effectively eliminated that labor advantage that the large law firm has. So in one way, AI can be corrective of asymmetries.

And then, of course, there's the other piece of it, which is that the lawyers who depend on more cases for their economic survival can actually take on more because they could be more efficient along the way. And that might also translate to cost savings for clients. Why? Because right now, law is often a margin business, right? I remember during COVID, shortly after there was a sort of reopening of schools and stores, I decided to brave it and get a haircut at one point. And I remember my barber, he brought out a shoulder massager and then he put, like, a eucalyptus-infused towel on my face. It was this guy I went to for years, and it was a totally brand-new thing. I was like, “Hey man, what's, what's going on here?”

And he actually was really honest with me, and he goes, “Hey man, we're not getting the same volume we used to get, and so we need to get more out of a given client.” And I'm like, okay, maybe you shouldn't say this to too many people. We were friends, so it was okay for him to sort of confess that to me. But a lot of legal services depend on protracted engagements. And if we can get to a place where we can pivot from margin to volume, where a given interaction can be so economical that it's not overburdening the lawyer, then those lawyers can take on more clients. Because often the sort of economic arrangement is that people pay hourly. If the stuff takes fewer hours, then people will pay less. And there's a chance that we can have more affordable legal services. 

And that's not even including the fact that we can make public interest lawyers—lawyers that are working pro bono, lawyers that work for government institutions, lawyers that work for not-for-profit organizations—also more efficient and more accountable.
 

I'm curious, you've been working in this space—the intersection of law and AI—for more than five years, right?

Yeah, I've been thinking about the issue as a researcher since 2016, but I've been developing legal tech tools since 2017, 2018.
 

How has the legal community's perspective on AI changed since you started, from then to now?

Oh man, so this has really been fascinating to look back on. So when we first started talking about AI in the law, and showcasing legal technology tools that were helping to make lawyers more effective, there was definitely a culture of denialism in the profession. And it was mostly along the lines of “this tech can't do what you're suggesting it can do.”

So, for example, if we're saying that we can train algorithms on all of the historical case law to predict legal outcomes, people were often talking about, “Well, there's something esoteric about legal prediction, and there's a kind of gut feeling that's involved. And there's some skills that go beyond mere computation involved.”

And even though all of that was true, the technology was still offering something that was a step beyond what we can sort of do on our own. Then as we started to prove the concept more, as people started to see the tools being effective in limited circumstances, there became a culture of, “Okay, it can only do work serviceably. Like, even if the tech can do what you're saying it can do, it can't do it as well as I can.”

For a long time that was true. Now, my response to that is, okay, well, all you've done is prove your value in saying that the tech can't do it as well as you. All you've done is talk about your value, which I might agree with. But if the tech can do the work serviceably or in a way that's a little bit better than, say, not having legal services, or not having any support, or only using analog tools, then that might be a net positive for justice.

Now with the advent of ChatGPT, there suddenly became this feeling of, okay, the tech can do this in a way that might create some existential risk for the profession. And so what used to be the kind of defensive hubris kind of gave way to a professional anxiety and a labor market anxiety that I think kind of pervades the conversation today.

So even though there's a lot of fruitful discussion about ways to absorb the tools, it's all happening against this backdrop that this might mean some labor market contraction among lawyers. And part of the challenge here is talking about the benefit of the tools, but also telling lawyers that they have so much more to offer than the tools do right now, and it's really about incorporating them. It's about making it work for you.

And I think there's a moment of reckoning for the legal profession, which is we've never really had to defend our value in the world. Doctors have a very compelling case for why they exist. Lawyers often feel like they have the same compelling case, but we're at a moment right now where we might have to defend it. And I think that's a good opportunity—not only to signal to the public what we have to offer, but also to think hard for ourselves about what we have to offer, so that we're not misallocating our resources. So that we're offering services that are of quality.
 

I’m curious, what did the legal profession think when ChatGPT arrived?

I think the legal profession really had a curious distance from ChatGPT. I think we were, like everyone else, paying attention and looking at some of the whimsical applications of ChatGPT. You know, I was using it to write haikus for my daughter. And, I was looking at examples of people using it for simple writing tasks—drafting letters, the kind of stuff where you might not think of it as threatening to the core of your professional identity.

But then the horror stories started to hit the press. There probably isn't a lawyer in North America right now who's thinking about AI who isn't also familiar with the story of the lawyer in New York who used ChatGPT for a court filing and ended up submitting fake cases to the court. That, I think, has led to a really fruitful conversation about the limitations of some of the AI tools. You know, you could look at it two ways. You could say that lawyer in New York should never have used ChatGPT, or you could say they used it wrong. When it came to this limited subset of information, for which they had to be absolutely on point factually, they needed to do some of their own work.

I think some people come to that conversation with some fear. Maybe they want to use the tools, but they're afraid of looking like our unfortunate friend in New York. Or they don't believe in the tools at all, and they use this as a kind of point in their tally. It confirms their bias that these tools are no good, and so that has really been the template for the conversation. And so for people like me who have been saying, “Hey, there's some benefit to these if we're properly disciplined about how to use them”—it's been sometimes like pushing a rock up a hill. But the curiosity is there.
 

For a lot of the people that you're talking about, ChatGPT and these large language models were the point where they suddenly started to consider the implications of AI in the legal profession. You, of course, have been thinking about this field and this technology for a lot longer. I'm wondering what convinced you that AI was the technology that could change the way that legal work is done for the better. Was there a moment, or an event, or something that helped shape your thinking the way that ChatGPT is doing for a lot of people now?

There wasn't really a moment for me that made me go down this path. It was really my displeasure with the state of affairs in the legal profession. You know, people need lawyers to solve their legal problems. Lawyers have a privileged vantage point, they have a set of tools, and very often the legal system that people have to interact with is complex and peculiar in a way that makes it hard for them to manage.

So if you get into legal trouble, or if you need something, or if you're even sometimes trying to just receive something you're already entitled to, you very often need a lawyer to help you navigate it. But the fact that lawyers are not available to most people is kind of a jarring truth that's in the background of all this. It means that despite this profession being of vital importance, it's only really available to a privileged few, while the rest of us have to settle for competing for the limited subset of free, pro bono, or subsidized legal services that do exist. And so that, to me, seemed like a bad situation.

I used to practice law. I was a commercial litigator. And even though I was doing well and making a good salary, I was never at any point really able to afford to hire myself or one of my peers. And that, to me, seemed like a really distorted marketplace.

One of the things that I found frustrating about the access to justice conversation was a lot of it was about how to simplify the law. I don't know if that's the right goal, because life is complicated. You know, in the U.S. after the financial crisis, the big pieces of financial regulation were thousands and thousands of pages long, right? Why? Because financial services are complicated. The new financial products are complicated, and when you're trying to regulate in an area, you need to match the area's complexity. You need to have law that's sufficiently responsive to that area. If you were tomorrow to say “we need to build a new set of laws for the digital marketplace” or cryptocurrencies or even artificial intelligence, you would never get away with having highly simple laws. They would have to be complex.

And so it seemed to me like the social world is evolving in a way where the complexity of law was basically going to be a given. For me, it's about, hey, AI has something to offer here. Why? Because it can synthesize more information than we can. It can translate complexity in a way that's legible for us. And so we don't have to make the Faustian bargain of: let's have simple laws that don't actually achieve the ends that we're hoping for them to achieve. We can still have some complexity in our legal system, but that complexity can be made more accessible and more legible to us through technology. For me, preserving the law’s complexity while at once making it more accessible is the true promise of AI.
 

Part of what you're talking about is, being able to summarize complex information, analyze complex information, really pull insights out of stuff that is, maybe too long, lengthy, complex for a non-legal professional like me to sift through. And that sounds really, really important. And I wonder whether it is just as important that we can now get to those end results by just being able to ask simple questions about a legal document or about the law. What are the implications of that?

I think what we're trending towards is a world where people don't need to have specialized knowledge to understand their own legal problems. And that's really what the promise of AI is. When we talk about UX and UI design for websites or for a piece of software, we don't talk about making the software any less effective or less complex or less capable. We just talk about making sure it interfaces with people in a way that works for them. And I think the legal system has to face that same moment where we have to say, “Actually, we need a bit of UX/UI here. We need a little bit of user friendliness. We need to have some design principles.” Because right now, people are not being well served.
 

In addition to teaching at the University of Toronto, you mentioned that you also worked for the legal tech company Blue J as their vice president of legal research. You're still a consultant for them, correct?

That's right. From time to time. Yeah.
 

What is Blue J? And what did you do there, and what do you still do there?

So Blue J is a legal tech company that uses AI to improve legal research, and it does so in a couple of ways. One early product that we built was a machine-learning-enabled case predictor, where you would describe the facts of your legal situation in tax or employment law to the software, and it would, in a matter of seconds, synthesize all the historical case law in the area and give you a prediction.

Most recently, in part because of the growing sophistication of large language models, we built a tool called Ask Blue J. Ask Blue J is really kind of the next step of the large language model infrastructure. We've used the language generation capacity of these large language models, but restricted the information they can pull from to an already vetted, scored universe of legal material. And so, imagine being able to generate language like ChatGPT, but only pulling information from, say, US federal tax law. You actually have a tool where you ask it tax questions and it is disciplined enough to only give you answers based on legal information we know to be true.
 

I know that you and Benjamin Alarie, one of the co-founders of Blue J, also published a book in July called The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better. I'm wondering if you can tell me a little bit about what that term, legal singularity, means.

Oh, yeah. So the legal singularity is a description of a future state where the law is going to be knowable everywhere, on demand, right? Where people have a real-time sense of their legal rights and obligations. And our argument is that the advent of AI, and the application of AI to the law, is making that much more possible—where the law is going to be functionally complete.

So to understand this, you have to understand that the law is in many ways uncertain right now. Even as a lawyer and a law professor, I don't have a perfect sense of what a court is likely to do in a given moment, nor do I always have a sense of exactly where the law is. Because, as we know, sometimes the law is elusive, right? If you look at a court case, very often the dispute is about what the line is, right? A lot of people are shocked to learn when they read a court decision that there's not always—especially in an appellate court decision—a robust discussion of what happened in the case. But there's always a discussion of what the law is, and there's always a competing view on what the law is on this matter. And so if we can get to a place where the law is kind of understood, is predictable, then think about what that might mean for lawyers who can actually do the second and third order thinking, the judgment, the work of actually helping deliver their client the justice that they're hoping for—as opposed to using all their time and energy to try to find what the law is.

Think about what that might mean for judges who, rather than having to adjudicate as between competing views of the law, they can actually sort of stipulate what the law is, and then think hard about what the appropriate sanction is, or the appropriate punishment, or what should really happen in this case. And then think about for all of us as a community. If we know what the law is, we can have the conversation of what the law ought to be and should be. And so, the legal singularity is this future state where our knowledge of the law converges around this sort of predictable fulcrum. And I think that that's a fantastic promise. 

Now, at the same time, one of the themes of our book is that this has to be the result of a deliberate and concerted effort, right? Because the risk of inequality is significant from AI—not just because of inequality that could be manufactured by technology, but also because we could project existing inequalities into the future if we just use prediction without any meaningful safeguards. Also because we run the risk of not planning the use of AI effectively in our society so that we could maybe not be as coordinated as we need to be. We could maybe be using AI in places where we actually ought to have the ordinary, analog kind of interaction. And so for us, it's about calling everyone to the table for the thoughtful conversation about how to land the plane, so to speak, because there are real risks.
 

That's a good segue into another question I wanted to ask, which is: what's a common misconception that people have today about how people in the legal profession are already using AI tools?

We've been talking today about some of the more sophisticated applications of AI in the law—legal prediction, using tools to fortify your knowledge. Still, most of the tools that are being developed for the legal profession—AI tools included—are about automating administrivia: better email filtering, more effective document storage, tools that help you with docketing and tracking your time, tools that help you get through large volumes of documents, et cetera.

And so it still remains the case that a lot of the tools that are being built and the tools that are being embraced are the kinds of things that will probably be invisible in the near future, because we'll use them as efficiently and effectively as we use Finder on macOS or, you know, Microsoft Office. Those are also the kinds of tools that don't generate the ethical and policy questions that these others do, which really go to the heart of legal knowledge and legal information. And so the major misconception misses the fact that most of the applications of AI in the law remain these kinds of assistive technologies that don't compete directly with lawyers and don't implicate lawyers’ self-image.
 

I want to zero in on that word, assistive, because you've said that these tools aren't going to replace lawyers, but rather augment the skills and abilities that they already have. Can you elaborate on why that's an important distinction?

Great. Let me start by saying that if we all decided to snap our fingers, use all the AI possible, and become automated lawyers, there really wouldn't be enough tools for us to use today, nor would there be enough good tools for us to provide the quality of service we hope to provide. So it's still very much day zero. I'm glad we're talking about this. It's still early, right? And that's one of the reasons we can't think of these as turnkey, wholesale replacements for what lawyers do: there's not enough coverage. That's one reason.

The other reason is that the things technology is good at—computing at scale, sifting through large volumes of data much faster than any human being—are only one component of lawyering. And because we have this professional license and these duties to the public and to clients, it's not enough to just pass that information along; it's probably professionally negligent to do so. There are expectations of us, which mean that we have to be the primary interface with our clients—that we are the ones who hold the duties, the ones who are ultimately responsible. We are the ones who have to reconcile whatever the tech gives us with our experience and our knowledge and our judgment. We're professionally obligated to do so. So the tools can't be anything more than assistive right now.

But the thing that is most important to remember is that client problems present in complicated ways. We often have to translate them for the technology, and then translate the answer back.

And so until technology can play that role, which I'm not very confident it will ever be able to, we have to work alongside it.

I think a division of labor between human and technology not only helps us deliver better services, it also helps us navigate the ethical issues and the technological challenges. Let me give you an example. I bought something on Amazon recently using the “buy now” option—the one-click purchase. And the reason we have one-click purchase is that we've decided three clicks are too many, right? Our expectation of technology is that it deliver exactly what we want, right away, with the minimum amount of effort. But sophisticated AI tools are working with much more information; they don't depend on a simple mechanical command. If a tool is trying to synthesize all of the historical case law in an area and respond to you in the King's English, then you're going to have to accept a margin of error that we're not accustomed to accepting from our technology.

You know, if my computer buffers for too long, I might throw it against the wall. That's my relationship to tech. And what I want lawyers to think about is a new relationship to tech—one that's much more like a professional relationship. Think about a tool like GPT as the really eager intern or summer associate: the person who's eager to please, who has the skills and ability, who depends on you for direction, who's tireless, who maybe stays up all night, but who maybe doesn't know anything yet. And so your job is to interact with it the same way you would with that person in your office. Number one, you recognize that you're responsible for directing them. Number two, you're responsible for redirecting them—you follow up, you ask additional questions. Third, you don't automatically trust everything they give you. You're not just passing the intern's memo on to the client. You're double-checking the factual assertions. You're making sure it meets your approval and your own standards of quality.

And so if we start to think about a negotiated relationship in that way, I think some of the apprehension about all these tech tools goes away, because we can fit them into a template that's familiar to us. You know, everyone kind of knows what it's like to work with the really eager person who's brand new. So why not think about the tech that way? I think it's a liberating kind of framework.
 

So what you're talking about is where we are now and where we're probably going to be in the next little while. What does AI in the legal profession look like five years from now, or even 10 years from now? Can we predict that far out?

I think it would be difficult to see any radical changes in the next five years—partially because there are still institutions at play that are themselves slow to change. Legal professional regulators, for example, are unlikely in the next five years to say, “Hey, legal tech developers can build legal advice tools for the public.” Right now we have rules around the unauthorized practice of law that would make that impossible. So until those things go away—and I don't imagine them going away in the next five years—you're probably not going to see this radical shift where suddenly everyone gets legal advice from, effectively, the robots, right?

The other thing is that you have to allow for some false starts. The first wave of legal AI tools is probably not as effective as the second and third waves will be. And so maybe we shouldn't fault lawyers who are unwilling to embrace that first set of tools because of their imperfections, right?

Especially if you already have a business that's thriving, the risk of reconfiguring it in the face of speculative tools is pretty high. And so the stuff has to get better, and I think it's going to get better over the course of more than just five years. I think there'll be a point at which the tools are completely undeniable—I don't know that they're there yet—and when that comes I think there'll be more widespread embrace. 

But I think what's going to happen in the next five years is that the existential angst is going to make way for a new consensus around what lawyers have to offer. If you imagine a law school a hundred years ago versus now, the only difference would be the laptops on the desks today. If you imagine a courtroom a hundred years ago, maybe today you'd see some screens for evidence presentation, but for the most part we're doing things the same way we've always done them. Given the examples we're seeing in sister professions like medicine and accounting, and given that the economic case for AI is so compelling that the market is not going to avoid it for too long, I think we're all going to have to alter our expectations and think through: what is it that we're here to do? And if we can have that conversation in the next five years, we can avoid the trap we fell into in the last five years, which is that we did none of the preparatory work, were caught totally unawares by this new technology, and then spiraled.

I think we have an opportunity right now, while it's still day zero, to be proactive about how, as a profession, we're going to confront the challenges of new technologies. And I'm seeing the seeds of that conversation right now in a way that gives me hope. Because the truth of the matter is, the stuff's not going anywhere.
 

Abdi, thank you so much for joining me.

Hey, thanks for having me. Always a pleasure.
 

~ ~ ~
 

Look, I won't lie. I'm still thinking about the banker boxes. I think it’s because, as a writer, I can relate. I'm often drowning in documents too! Articles, reports, interviews, books—there’s so much to read. And sometimes all I'm looking for is that one elusive detail, or that perfect, missing quote.

So, yeah—I get where Abdi's coming from. 

If we have this technology that can take the drudgery out of work—that can help us find things more quickly, find the right things more quickly—how else could we use that time instead? For me, I want to write things that people are going to like and enjoy and hopefully remember. If AI could make that process a little less painful—for me, at least—that would be nice.

What I like about Abdi’s case for AI in the legal profession is that it’s not just about helping lawyers. It's about making justice more accessible, affordable, accountable. For Abdi, it’s an opportunity to improve people's lives.

If you want to hear more from Abdi, you can find a link to another conversation we had last year in the show notes. The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better, which Abdi co-authored with Benjamin Alarie, is out now from University of Toronto Press. We’ll drop a link to that, too.

Working Smarter is brought to you by Dropbox. We make AI-powered tools that help knowledge workers get things done, no matter how or where they work. 

You can listen to more episodes on Apple Podcasts, YouTube Music, Spotify, or wherever you get your podcasts. And you can also find more interviews on our website, workingsmarter.ai

This show would not be possible without the talented team at Cosmic Standard: Our producers Samiah Adams and Aja Simpson, our technical director Jacob Winik, and our executive producer Eliza Smith.

At Dropbox, special thanks to Benjy Baptiste for production assistance and our illustrators, Fanny Luor and Justin Tran.

Our theme song was created by Doug Stuart. 

And I’m your host, Matthew Braga. Thanks for listening.
 

~ ~ ~
 

This transcript has been lightly edited for clarity.