Free AI Lecture for Law Students Focuses on Risks—and Benefits—of AI

06.04.2025

By Ed Finkel

Faculty

Daniel Linna Jr., senior lecturer and director of law and technology initiatives at Northwestern Pritzker School of Law, often attended the continuing legal education programs of the Practising Law Institute during his years as a practicing attorney, and he’s since provided several of them. When his contact there mentioned potentially creating a free program for summer associates on the risks and benefits of using AI in law practice, Linna jumped at the opportunity.

“You should do that. It’s sorely needed. And I want to do it for you,” he volunteered, adding that time was of the essence “so that summer associates in law firms and interns in courts and nonprofits can watch this before they start their internships this summer.”

Linna came to this urgent perspective after stints as an academic for the past decade, an equity partner in a large law firm before that—and as a software developer, consultant and IT manager prior to entering the legal world.

“I’ve been a part of delivering sophisticated legal services, and I know enough about the technology and how it can all come together,” he says. “I’ve been teaching classes on this for 10 years. We’re doing research in this space about the capabilities and effectiveness, and the professional obligations that lawyers have when they use AI and other technologies.”

In addition to being increasingly capable, AI tools are being used widely not only by practicing lawyers in all types of firms but also by legal aid organizations working to improve access to justice, and by judges and others in the court system, Linna says. Yet there’s a lack of understanding about their capabilities, particularly those developed specifically for the legal domain.

“There’s a lot of worry about the potential risks and not nearly enough focus on the potential benefits,” he says. “We need more people who are thinking about, ‘What does justice look like in 10 years? What does the rule of law look like in 10 years, 20 years, 50 years?’ And lawyers are too much asking questions about, ‘How is it going to affect us?’, vs. saying, ‘How do we take and harness these tools, to be proactive, so we can create what we want the rule of law, and justice systems, and the legal profession to look like far into the future?’”

The program Linna recorded in April gave participants the knowledge to: explain how large language models and generative AI work, identify the types of tools they’re using, understand scenarios they might encounter during their internships, leverage the benefits and mitigate the risks of AI, appreciate related professional responsibility obligations, and be ready to ask the necessary questions before using an AI tool.

These concepts are “important for legal services delivery because it can help us improve access for everyone, in many dimensions,” Linna says. “Students [are] going into organizations where, in some cases, they’re going to be expected to use AI tools. In other organizations, at the other end of the spectrum, they may not give very much guidance on whether [summer interns] should or shouldn’t.”

While it’s important to know the risks, Linna notes that the ability to use AI tools to produce higher-quality work will accelerate students’ careers, and he says that’s just as important as knowing how not to use AI. Linna brings such tools into his classroom, for example, by creating conversational AI tutors.

“I’ve seen a dramatic improvement in the students’ writing by using tutors at different stages: as they come up with the topic, as they create outlines for their work, as they create a first draft of their work,” he says. “It’s not merely putting in a prompt, and copy-and-pasting. That’s not the proper way to use these tools. But by using it to iterate on your work, for brainstorming, as a true partner in co-creating work, I’ve seen it dramatically improve the work product that my students are creating. And that’s just one example of the way AI is being used.”

In creating his program for PLI, Linna worked to ensure that listeners gained a functional understanding. “You don’t have to have the level of knowledge of a PhD machine learning engineer, but it can’t be ‘magic’ to you,” he says. “If you just think it’s magic, you don’t understand enough about the way these tools work to use them responsibly and well.” If someone tells you that a tool uses AI, he adds, you should be prepared to ask questions like: “What do you mean? Is it generative AI? Which generative AI tool? How is it developed? Is it a machine learning tool? Where did the data come from? Is it a rules-based system? Who has access to information that we input?”

Linna has heard lawyers push back on the idea that they need to understand AI systems at that level, but he insists that it’s essential to using them robustly. “You really have to interrogate the system, and you have to ask the right questions,” he says. “And if you don’t have a functional understanding of artificial intelligence—what it is, how the systems are designed, and created, and validated—then you can’t do that. You can’t do what you’re supposed to do. You can’t fulfill your ethical obligations as an attorney and manage the risks; and you’re going to be deficient in your ability to use these tools to improve work product and properly serve clients.”

In creating the PLI webinar, Linna hopes he provided law students with the baseline knowledge to be part of this conversation. “We should be proactive and push this forward, vs. kind of just being passengers on this AI rocket ship,” he says.