Source: Cipher Brief
False flag operations have been routine ploys in espionage and warfare for centuries. Now they have turned up in cyber operations. The Cipher Brief spoke with Tim Maurer, co-director of the Cyber Policy Initiative at the Carnegie Endowment for International Peace, about the history of these subterfuges and how governments conduct them in cyberspace.
Tim Maurer: A false flag attack is when the attacker pretends to be somebody else. In the context of cyber operations, a false flag attack means that the attacker pretends to be another actor who actually exists, rather than simply creating a fake online identity to obfuscate the attacker’s real identity.
The distinction matters because the victim might decide to use countermeasures or retaliate, and such reactions would target not the actual attacker but whoever the attacker pretends to be.
There is a fairly robust norm in international humanitarian law against perfidy, which the International Committee of the Red Cross defines to include the improper use of uniforms or emblems of neutral states or of other states not party to the conflict.
However, international humanitarian law applies only to international and non-international armed conflicts. Most offensive cyber operations to date have not taken place during an armed conflict. Related false flag operations therefore could be considered permissible ruses, in the absence of other indications against them.
The best historical examples and analogies come from naval warfare, since ships actually carried flags that could be falsified, in addition to soldiers’ uniforms and other emblems. One example is when the British navy ship Baralong flew the U.S. flag during World War I, while the U.S. was still at peace with Germany, and then fired on a German U-boat. Another example, dating back to the 18th century, is when the French ship Sybille pretended to be British as a ruse against a British ship.
TM: A false flag operation in the context of offensive cyber operations can consist of several methods. For example, a state actor could simply create a false online identity, say of a hacktivist group that pretends to be associated with the Islamic State, and then use the profile to issue a statement claiming credit for the attack, to create the appearance that the terrorist group rather than the state actor was behind the operation.
Another method is to make it look as if the malicious activity originates from whomever the attacker is trying to frame, or to use malware that has been tied to another malicious actor as part of the offensive cyber operation. This can range from using malware developed by criminals and available on the underground market to using malware developed by another state. The state-developed malware may have become public after being caught and analyzed by security researchers, or a state could have obtained access to it during an intelligence operation against the other state. The increasing commoditization of the cybercrime underground market and the modularity of offensive cyber operations facilitate this method. It is also worth noting that sophisticated actors could use hackers skilled in other languages and keyboard layouts, or hackers who operate only during hours that correspond with the time zone of the actor to be blamed.
TM: We don’t yet have a comprehensive enough picture to assess how common false flag operations are compared to other operations. In fact, in many cases we still lack certainty about whether a nation-state, a proxy, or a nonstate actor was responsible for an operation. We probably won’t know for years which operations were false flag operations, except for the ones that attracted particular attention and were subjected to the full investigative power of a major nation-state capable of combining signals and human intelligence to attribute the incident and determine whether it was a false flag operation, unless, of course, such data gets leaked beforehand.
TM: Yes, states are constantly sending signals to each other. These can have various effects and can be intended to reach only one, a few, or many recipients. Attribution capabilities are highly asymmetric at this point: very few states can attribute malicious cyber activity with a high degree of confidence. It is therefore possible that a state might try to send a signal to another state, knowing the recipient will be capable of attributing the true source while all or most other states will not notice. This tactic means the effects of such operations must stay limited enough not to attract others’ attention or demand a significant response. That is what makes false flag operations much more a tool of spycraft than of warfare.
This interview was originally published in the Cipher Brief.
Tim Maurer
Former Senior Fellow, Technology and International Affairs Program
Dr. Tim Maurer was a senior fellow in Carnegie’s Technology and International Affairs program.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.