Northwestern researchers examine the potential harms to democracy
Will the lure of deepfakes prove irresistible to democratic governments? What questions should governments ask — and who in government should be asking them — when a deepfake is being considered?
Two Northwestern University professors coauthored a new report examining several hypothetical scenarios in which democratic governments might consider using deepfakes to advance their foreign policy objectives and the potential harms this use might pose to democracy.
A deepfake is a digitally altered video, photo, or audio recording, typically used maliciously to spread disinformation and create confusion. A well-known example is a fake video that surfaced in March 2022, in which a digitally altered version of Ukrainian president Volodymyr Zelensky tells his soldiers to lay down their arms.
“As AI has improved, deepfakes have gone from primitive to highly realistic, and they will only get harder to distinguish,” the authors write in the report. “This proliferation of AI provides an unparalleled opportunity for state actors to use deepfakes for national security purposes.”
The researchers posit that the lure of deepfakes will eventually become irresistible to democratic governments. “It will not be long before major democracies, including the United States, start or at least consider using deepfakes to achieve their ends, if they have not already done so,” they said.
According to the authors, officials should weigh several factors when considering the use of a deepfake:
the likely efficacy of the deepfake,
its audience,
the potential harms,
the legal implications,
the nature of the target,
the goal of the deepfake, and
the traceability of the deepfake back to the originating democratic government.
In general, the authors argue that deepfakes should not be used, as they are likely to reduce the credibility of democratic governments. There may be rare circumstances, however, when the use of deepfakes deserves serious consideration. In these cases, the authors say, governments should develop a process for approving or rejecting deepfakes that ensures a wide variety of perspectives are brought to the table.