The Human Element in AI: Redefining Legal Roles, Scaling in Enterprises, and Leading in 2025

Jake Jones and Richard Mabey delve into the future of legal work, balancing AI automation with human nuance, overcoming enterprise adoption challenges, and shaping the next generation of in-house legal teams.


In this third part of the series, Jake Jones (Co-Founder at Flank) and Richard Mabey (CEO at Juro) confront what the automation of legal work means for us as humans. They discuss the changing role of senior lawyers, protecting the human quality of legal services, the challenges of scaling into large enterprises, and how Jake would lead an in-house legal team in 2025. 

Part Three: Changing Roles – Human Qualities – Large Enterprises – Free Advice

The Changing Role Of Senior Lawyers

Richard Mabey: That human side is so interesting. When I was a trainee lawyer, we’d do tasks like creating the first draft of a contract, or reviewing a stack of contracts and identifying key clauses. Then your supervisor would basically “grade your homework” and find points you weren’t able to spot. If agents are going to be increasingly able to conduct those tasks in a human-like manner, what does that mean for the role of senior lawyers? How will it change what it means to be a lawyer?

Jake Jones: It’s a really interesting emergent question; we don’t know. My prediction is that the more senior members of the team will be much more involved in strategically defining the governance guardrails. A legal agent needs to know how to act in different situations, so who’s defining that? Obviously, the senior lawyers are. At the moment this is very novel to them. They’ve never had to set guardrails for an agent before, so they might just upload a set of SOPs or previous contracts. 

But legal teams will become better and better at leveraging agents to learn about things like where they’re getting pushback on their terms, how they should therefore change their SOPs, and where they should refuse to negotiate entirely, because it’s not worth their time and effort. They’ll be leveraging the data generated by the agent to strategically guide what the agent should do. Essentially, they’ll be establishing the blueprint for the team, which in a sense they’re already doing, but it’s been people, including themselves, executing on that, not agents. 

Richard Mabey: It’s like having your own team of agents. And this idea of leverage is interesting. As a lawyer, you’re conducting individual tasks. As a manager of lawyers, you’re delegating tasks. Maybe in five years, you could have a small legal team with massive leverage, running multiple agents, overseeing playbooks, and carrying a high level of responsibility, but without the headcount.

And we’ve both seen, for the legal teams we work with, how hard it is to get new people in 2024. 

But so far we’ve been talking all about the positives. What worries you about the vast automation of legal work? 

Protecting The Human Quality Of Legal Services

Jake Jones: Earlier today, we were hosting an event for general counsel and legal ops professionals. Someone asked what people enjoy most about the job. The consensus was that the human element, communicating with other teams across the organization, was one of the most important and enjoyable parts of the work. I think protecting the human quality of legal services is important.

There’s also something unquantifiable about, for example, a negotiation process. You’re trying to find a way of getting both sides’ needs met, and there’s something qualitative and instinctual in that. It’s not just that both parties need to stay within certain limits; maybe that’s the case with smaller, less important deals, but there’s a human nuance that, today at least, can’t be captured by agents, and certainly not by logic.

We’ve actually worked a lot with logic trees in the past. The birth of our company was logic trees and decision trees, and even though we were coming from Germany, where the legal system is very logical and deterministic, they couldn’t capture that human nuance. This is what worries me about trying to automate away something like negotiation.
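To make that limitation concrete, here’s a minimal, hypothetical sketch of a deterministic decision tree for one negotiation point (illustrative only, not Flank’s actual system; the thresholds and responses are invented). Every situation has to be anticipated as an explicit branch, which is exactly where the human nuance gets lost:

```python
# Hypothetical sketch: a deterministic decision tree for reviewing a
# liability cap. Every case must be enumerated up front; anything the
# authors didn't anticipate falls through to a human.

def review_liability_cap(deal_value: float, proposed_cap: float) -> str:
    """Return a canned response based on hard-coded thresholds."""
    if proposed_cap >= deal_value:
        return "accept"
    if proposed_cap >= 0.5 * deal_value:
        return "counter: raise cap to 100% of deal value"
    # No branch can weigh relationship history, tone, or the strategic
    # value of the deal -- the nuance a human negotiator relies on.
    return "escalate to legal"

print(review_liability_cap(deal_value=100_000, proposed_cap=60_000))
# -> counter: raise cap to 100% of deal value
```

The tree is transparent and auditable, but it can only ever return answers someone wrote down in advance.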

Then there’s the relationship to other teams. This technology should be improving relations between sales and legal, for example, not pushing them apart. This is why I’ve referred to some self-serve tools as passive-aggressive: they’re saying, do it yourself. And I don’t love that.

The Challenges Of Scaling Into Large Enterprises

Richard Mabey: Let’s talk a bit about adoption. You’ve had amazing early adopters. What adoption challenges do you foresee for Flank as you scale into large enterprises? 

Jake Jones: Already, maintenance of the technology is a major adoption challenge, one that needs to be owned by the product builders. If someone is deploying a tool that’s based on a certain playbook, what happens when that playbook changes? Are we, by introducing Flank to an organization, giving the team more work? Do they now have to maintain a playbook and other documentation, when they’re already maintaining documentation in Vanta and Notion? We obviously cannot push that to the customer. We have to find clever, innovative ways of automating that maintenance by leveraging AI. How can the agent learn as it goes?

But large enterprises are a very different game. We closed our first two enterprise customers within the last few months, and we’ve been onboarding them. I’m trying to think of what the greatest challenges are. With AI in particular, any organization you go into, especially a technology organization, is going to have a wealth of AI tools and/or pilots and its own internal projects already underway. The first thing you hear when trying to sell AI to an enterprise is: thanks, but no thanks; we’ve got this covered, we’re building our own stuff. Not-invented-here syndrome is a real thing.

Trying to tell enterprises, with incredibly smart engineering teams of their own, that you know better is a challenge, especially when you see them advertising the amazing efficiency gains they’ve made by building their own tools. The key to overcoming these adoption challenges, which are also sales challenges, is continually convincing teams of what they can do with your software. 

The approach we’ve taken is to ensure that we’re solving novel problems for them, not problems they’re already solving, have already sorted, or have never really thought about. We do the discovery ourselves. We understand the problem they need to solve, because whether it’s a smaller company or an enterprise, if there’s a real problem that’s creating pain and impacting revenue, they’re going to have a lot of patience while you solve it with them, and they’ll even be pretty pushy about adopting that software.

Richard Mabey: Juro is a mid-market-focused tool, so we see this less frequently, but we do have similar experiences around the clarity of pain points. In mid-market companies you generally get adoption bottom-up, because the pain points are so obvious: the legal team is answering 500 simple, repetitive requests a month, or they’ve got 5,000 contracts and don’t know what they say. Often you can get a critical understanding of the problem space from the first call; it’s like you can see the pain in the eyes of the prospect.

Enterprises are obviously much more complex, and what we’ve seen is that they’re generally more top-down. The CEO says: Do AI. The next executive down says: I’ve got to do AI. The next person down goes: What AI shall we do? You get a solution in search of a problem, and sometimes it works. But as a vendor you need to spend a lot of time trying to uncover whether there truly is a problem that’s a perfect match with your solution, because maybe you’re closing the deal to hit someone’s internal AI target; a year later, if no one uses it, there’s no retention. 

Before we wrap up, tell us what challenges and technologies you would be looking at if you were leading an in-house legal function in 2024 or 2025.

Free Advice On How To Lead An In-House Legal Team In 2025

Jake Jones: I would want to find the problems that can be entirely outsourced to AI now. First, I would consider what I can automate, which includes fairly basic stuff, like templating. In 2024, there should not be any kind of document that is not automated. I would also entirely automate the high-volume work that is taxing but not complex, like routine, repetitive queries.

What I would not do is try to use AI to help me tackle some of the more complex work, because AI is going to get progressively better over time. The approach I would take is to grade my pain points, the tickets that land on my desk daily, from lowest to highest complexity; that’s my roadmap. I start with the lowest and see how far AI gets me. As soon as I reach a point where the AI starts to falter and I lose confidence in it, I stop and wait for the next foundation model. Or I wait for the next Flank to come along and solve it in a way that I can trust.
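As a rough, hypothetical illustration of that triage (the ticket types, complexity grades, and confidence scores below are invented for the example), the roadmap amounts to sorting recurring work by complexity, automating from the bottom up, and stopping at the first point where confidence in the AI falters:

```python
# Hypothetical sketch of the "grade your tickets, automate bottom-up"
# roadmap. All data here is invented for illustration.

tickets = [
    {"type": "NDA request",           "complexity": 1, "ai_confidence": 0.98},
    {"type": "standard policy query", "complexity": 2, "ai_confidence": 0.95},
    {"type": "liability negotiation", "complexity": 4, "ai_confidence": 0.70},
    {"type": "bespoke M&A advice",    "complexity": 5, "ai_confidence": 0.40},
]

CONFIDENCE_THRESHOLD = 0.9  # the point where trust starts to falter

automate_now, wait_for_better_models = [], []
for ticket in sorted(tickets, key=lambda t: t["complexity"]):
    if ticket["ai_confidence"] >= CONFIDENCE_THRESHOLD and not wait_for_better_models:
        automate_now.append(ticket["type"])
    else:
        # Stop at the first faltering point; revisit when the next
        # foundation model (or vendor) raises confidence.
        wait_for_better_models.append(ticket["type"])

print("Automate now:", automate_now)   # lowest complexity first
print("Wait:", wait_for_better_models)
```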

Then I would ensure that I fully understand what’s already possible with AI. You mentioned earlier that three out of 30 relatively forward-thinking general counsel and legal professionals were not aware of AI agents. We had a similar situation at a legal tech conference about six months ago, where one of our customers asked how many people in the room had adopted generative AI. It was three out of 100; only three percent said they were using it in their day-to-day work.

And when we speak to the people who aren’t using it, more often than not they’re astonished at what’s possible, because they looked at something six months or a year ago and didn’t appreciate the rate of change since. So I would try to understand exactly, and stay on top of, what’s already possible, which is no easy task.

Richard Mabey: It’s such a refreshing message, because we often have customers and prospects who spend a lot of time looking into the foundational technology, but not necessarily using it. It’s like wanting to make some toast and just reading the instruction manual: how the toaster works, how many amps run through it. But it’s pretty obvious how it works once you put the bread in and try it.

You’re a chief product officer, and in the end we’re all product people in this world; we come back to the absolute fundamentals: what are the jobs we have to do day-to-day, and which ones are repetitive and painful? Once we’re clear on that, we look into what’s possible in the solution space and try it. Often, when you try something like an agent, you quickly get a sense of what it is; you don’t have to read a paper from Harvard or Stanford.

Jake Jones: That’s very true. The more hands-on you get with this technology, the better your sense of what AI can do. In a way, the AI companies building at the application layer understand AI better than anyone, because they have to make it do stuff in the real world.

Richard Mabey: Jake, thank you so much for being on Brief Encounters.

Find out more about Flank's AI Agents here.

🎧 Listen to the full conversation on Episode 8 of Juro's Brief Encounters here.