Northwestern Pritzker Law Co-Hosts “Generative AI + Entertainment: Opportunity, Ethics, and Law”



On February 1, Northwestern Pritzker School of Law, in collaboration with the School of Communication, held “Generative AI + Entertainment: Opportunity, Ethics, and Law,” a conference that explored AI’s place, potential, and limitations in the creative industries, drawing on the faculty expertise of two Northwestern schools and leading alumni in the media and entertainment field. The conference was part of the Law School’s new West Coast Initiative, which builds on the school’s collaboration with Kellogg to create the San Francisco Immersion Program, where students learn a mix of law, business, and technology.

Dean Hari Osofsky welcomed everyone online and in person at the IMAX Playa Vista in Los Angeles, saying, “At this time of change and challenge, our alumni and faculty at both schools are providing important leadership, research, and teaching, and helping to prepare students to lead through their ability to critically engage with these emerging technologies.” She mentioned that in the spring of 2023, the Law School was one of the few in the country that brought ChatGPT into legal writing and ethics classes. “We’re continuing to build and look forward to working with all of you to build upon our many collaborative research and teaching partnerships, including the West Coast Initiative, Law and Technology Initiative, and MSL program, which is celebrating its 10th year this year.”

The day’s first panel explored how generative AI is changing entertainment, technology, and the law. Emerson Tiller, J. Landis Martin Professor of Law and Business and director of the West Coast Initiative, moderated a discussion between the Law School’s Professor Paul Gowder and Professor Daniel Rodriguez, the School of Communication’s Professor Nick Diakopoulos and Professor Aymar Jean Christian, and Kellogg School of Management’s Professor Birju Shah.

Tiller started by asking how the panelists saw AI changing the conversation in their fields. Gowder, a self-described “rule-of-law guy primarily,” discussed generative AI in terms of power and control, a concern he believes receives too little attention. “When we create automation, we also create the capacity for centralization,” he said, raising concerns about how AI can create unchecked power in the military or media. “Do publishers, which have the capacity now to generate more content without the consent of writers and editors, thereby have more centralized control over the contents of major media? And if so, what can the creative and intellectual industries do in order to retain control over that content? These are the questions from a control of power standpoint that automation technologies like gen AI supply to us.”

Rodriguez offered a regulatory perspective, citing the Collingridge dilemma: early in a technology’s development, regulators lack the information to predict its effects, but by the time those effects become clear, the technology is often too entrenched to control. “We can’t afford to wait until the technology is more mature and well underway. We need to have the kinds of guardrails that maintain and manage its use,” he urged. Rodriguez noted that existing legal paradigms are ill-suited to generative AI and emphasized the need to pay special attention to private ordering, standard setting, and the role of markets. He also stressed the importance of engaging in cross-border AI negotiations and solutions, particularly with Europe and China. Finally, he added, “I think a hold on public alarmism is a key part of the regulatory strategy.”

Later in the day, Deans Osofsky and Johnson held an industry spotlight with Stephanie Burns (C ’91), General Counsel at Sony Interactive, and Che Chang (JD ’08), General Counsel at OpenAI. Chang discussed his professional background in tech, working at a start-up in Silicon Valley before attending Northwestern Pritzker Law. After graduation, he worked at firms focused on tech start-ups before joining Amazon Web Services, where he eventually became the lead lawyer for its artificial intelligence business. In 2021, he joined OpenAI, and he became General Counsel in 2023.

When asked what law schools should do to prepare future lawyers and collaborate with the legal profession on these changes, Chang drew a comparison to the era when legal research had to be conducted in the library with textbooks, before databases like Lexis and Westlaw came onto the scene. He said that while AI technologies may require less digging from today’s students, “You’re still responsible for taking that and doing something with it. It doesn’t change the fact that you need to still understand the issues. You need to be able to communicate them very clearly to people. You need to be able to make decisions on them, you need to be able to lead people and lead teams over time. And those are the things that are going to be the most valuable skills that all the students keep building throughout their careers.”

After the panel discussed national and international regulatory issues and debates around copyrighted material being used to train AI, Dean Osofsky asked what opportunities Chang saw for collaboration between academia and industry in advancing AI-related legal education and practice. He replied, “Academia brings a bunch of extremely smart people who spend a lot of time thinking about what’s happened in the past, what’s happening right now, what might happen in the future, to think about frameworks and processes and governance models which are all going to be critical for how we resolve some of these [complex AI] issues.” He said current generative AI legal issues involving copyright, labor, and capital could be compared to a long history of case law, “similar analogous situations both from YouTube and Viacom or when the car first came out and the horse and buggies [were] being replaced. Those are all things that a deep collaboration between academia, industry, as well as government and civil society, will make much more effective and relevant.”

Chang closed by saying he was optimistic about the future and the capabilities these systems hold, citing examples like a small business owner using ChatGPT to help him write emails in English, people with aphasia using the DALL·E 2 image generation model, and a lonely 100-year-old man engaging with ChatGPT (affectionately naming it Chatty). “Really smart minds augmented with useful technology can potentially make breakthroughs,” Chang said. “Those are the things that are hopefully going to come in the next years [as] the promise of AI and why there’s so much interest and desire.”