The Deepfake and National Security
Topics: International & National Security Law
Sponsors: International & National Security Law Practice Group
An amalgamation of the terms “deep learning” and “fake,” the term “deepfake” first entered our lexicon in 2017. A deepfake is a photograph or video, generated or manipulated with deep-learning techniques, that depicts people saying or doing things they never actually said or did. According to Forbes, in early 2019 there were 7,964 deepfake videos online; just nine months later, that number had nearly doubled, to 14,678.
While the majority of deepfakes are humorous or pornographic (an estimated 96% of all deepfake videos fall into the latter category), there is growing concern that such videos might be used to influence global politics.
According to a recent report by The Brookings Institution, deepfakes can be used in a number of ways: “distorting democratic discourse; manipulating elections; eroding public trust in institutions; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials or candidates.”
There are now examples of deepfakes that were allegedly used for political purposes. In Gabon, President Ali Bongo’s political rivals claimed that a video of Bongo intended to demonstrate his good health and mental competency was actually a deepfake. There is growing concern that the proliferation of such videos, and the rapidly advancing technology that makes them increasingly difficult to distinguish from authentic footage, will be exploited by malevolent actors.
For its part, the U.S. Department of Defense acknowledges that deepfakes are a national security issue. According to Lt. Gen. Jack Shanahan, director of the Joint Artificial Intelligence Center, the Pentagon “saw strong indications of how this could play out in the 2016 election, and we have every expectation that—if left unchecked—it will happen to us again.”
For now, most legal and policy considerations are focused on detection. The two principal U.S. government efforts are DARPA’s Media Forensics (MediFor) program and its successor, Semantic Forensics (SemaFor), both of which pursue algorithm-based detection.
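At a very high level, algorithm-based detectors of this kind score individual frames of a video and then combine those scores into a single verdict. The following is a purely illustrative sketch of that aggregation step, not the actual MediFor or SemaFor pipeline; the function name and the threshold value are assumptions made for the example.

```python
# Illustrative sketch only: combining hypothetical per-frame "fake"
# probabilities (as a frame-level classifier might emit) into one
# video-level decision. Not DARPA's actual MediFor/SemaFor method.

def classify_video(frame_scores, threshold=0.5):
    """Return (is_fake, mean_score) for a list of per-frame fake probabilities."""
    if not frame_scores:
        raise ValueError("no frame scores provided")
    # Average the per-frame probabilities and compare to a decision threshold.
    mean_score = sum(frame_scores) / len(frame_scores)
    return mean_score > threshold, mean_score

# Example: a clip where most frames look manipulated.
scores = [0.91, 0.88, 0.12, 0.95, 0.87]
is_fake, confidence = classify_video(scores)
print(is_fake, round(confidence, 3))
```

Real systems are far more sophisticated (they examine semantic and physical inconsistencies, not just a mean score), but the basic shape of the problem, turning many noisy local signals into one judgment, is the same.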
Regulation could prove to be a much thornier issue, given the constitutional implications of burdening free speech and artistic expression. But it is clear that deepfakes are here to stay, and their implications for technology, law, policy, and national security cannot be ignored or discounted.
For further reading, please see “Deep Fakes and National Security,” published by the Congressional Research Service, available at https://crsreports.congress.gov/product/pdf/IF/IF11333