In January, the Illinois Supreme Court selected Daniel W. Linna Jr., Senior Lecturer and Director of Law and Technology Initiatives, to serve on its Artificial Intelligence (AI) Task Force as part of its strategic agenda, which provides a blueprint for the next three years.
The Task Force’s 20 judges, practitioners, and academics are charged with gathering knowledge and recommending how the Illinois Judicial Branch should regulate and use AI. They are expected to produce educational materials for court leaders on AI and generative AI; propose AI policy, guidelines, and court rules for the Illinois Judicial Branch; and recommend next steps for implementation and opportunities for AI adoption.
Recently, Linna has been working with McHenry County Chief Judge Michael Chmiel to assemble a program for judges on AI and the law, which he and Judge Chmiel will deliver during the Illinois Supreme Court Judicial College. Linna is also a member of the State Bar of Texas task force on AI and the law.
“The supreme courts play an incredibly important role in each of these states in driving innovation in the courts, thinking about how to use technology,” he said. “I’m excited these states created task forces and that I’m a part of them.”
Below is an edited and condensed discussion with Linna on AI and the law in Illinois courts and at the Law School.
What is the landscape of AI and the law in Illinois like compared to other states?
Across the United States, there’s a huge access to legal services problem. Something like 80 to 90% of people who have a legal problem don’t get counsel. They can’t afford a lawyer, or sometimes they can’t find a lawyer.
One thing that has been happening, and I know there’s some interest among some judges in Illinois that I’ve spoken to, is the idea of online dispute resolution. In the traditional model, we have this lengthy process, and then a judge or a jury makes a decision in a case. Parties almost always settle at some point, but getting there is a long and expensive process.
Online dispute resolution creates a process where you use technology to help the parties understand their obligations, rights, the applicable law, and what the decision is likely to be if the case is adjudicated. The ODR process educates the litigants and guides them toward thinking, “This is probably what a judge would do.” Ideally, the parties agree to a settlement through a process that is far less costly and leaves the parties more satisfied.
These are not just problems in the United States; access to legal services is a problem around the world.
China, Brazil, India, and many other countries have been experimenting with technology in their courts. In our Northwestern CS+Law Innovation Lab, we’ve worked with the Dominican Republic Supreme Court.
There are plenty of opportunities to step back and ask, “What can we learn from what other jurisdictions are doing?”
How can you make recommendations and educational pieces that will be sustainable or elastic and can be modified as things change? Who knows where we’ll be in two, five, or ten years?
Everywhere we see AI being used, we need a functional understanding of AI. What do these tools do? What are the benefits? What are the risks of using them? Using generative AI in education is a great example: “Do we want to say that students can’t use it? Or do we need to say, this is pervasive in society; these tools could help many people. Our students need to learn how to use these tools well and responsibly.”
There’s a lot of human work to be done on this for two reasons. First, to get any of these things to work, you can’t just release machine learning on the data and think it will figure it out. No, we have to give feedback to these systems. We have to tell it what “good” looks like, what’s accurate, what we care about, our values and goals, how we measure for bias, all those sorts of things.
Then, we need to think about technology and how we design systems. In academia particularly, there’s a lot of discourse around, “We’ll create this robot judge.” That’s not the way I see it developing. I see online dispute resolution. I see individuals with supercomputers in their pockets better understanding their rights and responsibilities.
We do have to be mindful of how judges will use these technologies. When lawyers use these tools to create briefs, some courts have banned them or said, “You have to disclose using these tools.” I think that’s the wrong direction to go, because the people who are going to make mistakes using these tools are not the people who are going to read these new rules and take care to comply with them.
Better yet, judges and courts can use technology to check the cases cited in briefs automatically. AI tools can help lawyers and self-represented people and improve access to justice for everyone. If we’re worried about certain risks, like AI system hallucinations that produce nonexistent cases, the solution is not to ban the AI tools; it is for judges and courts to use technology tools to identify the errors.
At the same time, if the judge uses a tool that says, “These are cases you should cite,” we should take care to design the system so that it doesn’t constrain the judge’s discretion. To do this, we need to understand functionally how and whether these tools work and what they do and ask how we design tools that empower the people who are using them. Not just the judges and lawyers but also individuals: individuals in court proceedings and individuals across society who need self-help tools.
How do you see Northwestern Pritzker Law situated in terms of covering and educating students regarding AI and the law?
There’s been a history of interdisciplinarity at Northwestern Pritzker Law and collaboration across many schools, including the business and engineering schools. Many of our faculty have PhDs. The Law and Technology Initiative is focused on bringing together computer scientists and law researchers and creating interdisciplinary classes like our Innovation Lab, where we have both computer science students and law students working in teams together. We’re working with technologists building prototype technologies and developing the skills to analyze these systems and figure out how we can and should use them in law for legal reasoning and legal tasks. I have colleagues in our legal writing program, like Michelle Falkoff, who use generative AI. They’re thinking about how AI should cause us to change our pedagogy, ensuring students understand the tools, benefits, and risks. My colleague Wendy Muchman introduces AI to her students to help them understand the professional responsibility and ethical considerations that we must address.
Some of my colleagues teaching contract drafting classes have been using these tools and introducing them to students. There are a good number of individual faculty members who are thinking about and incorporating these tools into their classes. We know our students will be asked to use these tools when working with judges or law firms. We already see it happening.
We have to prepare our students for that world. We have to think about how to make sure we use these tools ethically and responsibly and also use them well. How do we make sure law firms think of Northwestern Pritzker School of Law first when they’re looking at the marketplace and say, “We need students who understand these technologies”?
How will tools like AI affect the legal marketplace?
I gave testimony to the Illinois Legislature a couple of months ago, and one of the things I talked about was the unauthorized practice of law rules. We cannot just stand by and watch to see how this all develops. We have agency and we need to be proactive.
AI is going to affect the legal market. Individuals will be empowered, and they might not need lawyers once they are empowered, at least not in the same way as in the past. The bottom line is that 90% of people aren’t getting lawyers right now. Of course, there will still be things for lawyers to do to add value to the world, but what that looks like is going to change. It’s unsettling, sure. Understandably, there’s been a bit of resistance from some practicing lawyers and many state bar organizations to some of these changes. Sometimes, you want to say, “Gee, some of these lawyers are protectionist Luddites.” But I try to remind myself, “How do you lead people? How do you empathize and understand where they’re coming from? How can you foster change? How do you get them to see the benefit for society and also the opportunities for the legal profession to deliver on our goals and values?”
If you’ve practiced for 20 or 40 years, or even if you’ve just finished law school, the idea that these technologies are coming can feel threatening. How do I help people understand that there are tremendous opportunities here? The rule of law, access to law, justice, and these concepts we stand for are about serving people in society. I think if we put that first, many of these other things will take care of themselves. On this task force, we’ll need to have candid, tough discussions if we want to make progress solving these challenges.