Intelligence and the Disinformation Challenge

“The devil can quote Scripture to his own ends.” (English proverb, 1596)

Introduction

Disinformation campaigns organized by nation-states and non-state intelligence actors have a long tradition in the world of offensive Intelligence, as deception is a phenomenon inherent to all societies. With Intelligence going digital, however, disinformation campaigns have reached an unprecedented and dangerous political, social and economic dimension. Supported by cutting-edge AI-based cyber technology, exploiting the most sophisticated strategies of psychological and information warfare, combined with methods of social engineering and virtual reality, and benefiting from the Internet’s entire ecosystem, digital disinformation has evolved into one of the most powerful and influential clandestine means of offensive Intelligence of all time.

 

Targeting civil society, governments, national and international political, economic and corporate structures, and single individuals, ubiquitous digital disinformation jeopardizes our democratic achievements and threatens to erode democracy as a whole.

 

Given the human predilection for exhibiting on social networks one’s most intimate thoughts, private and political opinions, individual intentions and sentiments, paralleled by permanent and invasive online corporate advertising and lobbying, digital disinformation will almost certainly continue to expand in the near future as a means and method of political and economic competition and information warfare, exploiting the weaknesses and opportunities of the entire information landscape.

 

To prevent governments, corporations and civil society from falling victim to the destructive effects of digital disinformation, risk awareness and privacy protection will have to grow considerably in the near future, and efficient countermeasures must be developed and implemented.

 

We at Traversals, motivated in particular by the negative consequences of the multiple Covid-19 disinformation campaigns we detected, are determined to help our clients raise their resilience against any digital disinformation or influence operation weaponized against them. Traversals will contribute effectively to our clients’ risk awareness efforts by developing highly intelligent, AI-supported IT tools and operational methods to be used by analysts for

 

  • open-source information (OSINT) collection,
  • source identification,
  • content verification,
  • author attribution,
  • sentiment analysis,
  • fact-checking,
  • and the detection of false, so-called ‘alternative’, narratives.

 

Our multilingual natural language processing (NLP) tools, based on neural network technology, machine learning (ML) and big data resources, together with our federated search across 100+ data sources, will provide a reliable degree of recognition capacity. Because speed plays a decisive role in countering digital disinformation attacks, Traversals will automate most of its detection and analysis processes, giving our clients a decisive edge over their competitors in an extremely challenging environment.
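To give a flavor of one building block of such a pipeline, the sketch below shows how the detection of ‘alternative’ narratives can begin: grouping near-duplicate posts that reword the same claim. This is a deliberately minimal illustration using word-shingle Jaccard similarity, not Traversals’ actual tooling; all function names and sample posts are invented.

```python
# Minimal sketch of near-duplicate narrative clustering via Jaccard
# similarity over word shingles. Illustrative toy only - a production
# system would use multilingual embeddings and ML models instead.
import re

def shingles(text, n=3):
    """Set of lowercase word n-grams ('shingles') of a post."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Set overlap: 1.0 = identical shingle sets, 0.0 = disjoint."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def cluster_narratives(posts, threshold=0.5):
    """Greedy clustering: a post that shares enough shingles with a
    cluster's seed post is treated as a variant of that narrative."""
    clusters = []  # list of (seed_shingles, member_posts)
    for post in posts:
        sh = shingles(post)
        for seed_sh, members in clusters:
            if jaccard(sh, seed_sh) >= threshold:
                members.append(post)
                break
        else:
            clusters.append((sh, [post]))
    return [members for _, members in clusters]

posts = [
    "the vaccine contains a secret tracking chip say insiders",
    "insiders say the vaccine contains a secret tracking chip",
    "local council approves new cycling lanes downtown",
]
groups = cluster_narratives(posts)
print(len(groups))  # 2: the reworded chip claim forms one cluster
```

Grouping variants first is what makes the later steps (source identification, author attribution) tractable: analysts verify one narrative cluster rather than thousands of individual posts.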

What Is Disinformation?

The lexical definition of the term ‘disinformation’ is fairly straightforward:

 

“Disinformation is false or misleading information that is spread deliberately to deceive. It is a subset of misinformation. Misinformation is false or inaccurate information that is communicated regardless of an intention to deceive. Examples of misinformation are false rumors, insults, and pranks.” – Wikipedia

 

However, what seems to be a well-known social phenomenon captured by a simple and clear definition turns out to be an extremely complicated process, and sometimes even a dangerous psychological weapon when used in an offensive intelligence context. Meanwhile, the technique of digital disinformation has gained a virality never seen before. The continuous multimedia online spreading of fake news, lies, misinformation and disinformation, memes and other manipulated facts has reached such an intensity that experts already call our times the “era of disinformation”. The main targets of digital disinformation are politicians, journalists, bloggers, antiwar activists, prominent individuals, governments, international big-business players and industry, but also civil society as a whole when government leaders spread fake news to pursue their own political agenda.

 

“Conspiracy theories and misinformation narratives that prey on individuals’ fears and uncertainties continue to mar the online landscape. Against the backdrop of a global pandemic, an economy in turmoil, and a tumultuous presidential campaign, there is ample terrain for malicious actors at home or malign foreign operatives and governments abroad to exploit such tensions by stoking chaos and dangerous, even life-threatening, untruths. Recent trends in the spread and reach of misleading or unfounded claims online have the unsettling potential to come to a head after Election Day – particularly if the certified outcome is unknown or contested in the ensuing days and weeks.” – Statement from the US House Intelligence Committee’s announcement of a virtual open hearing on misinformation and conspiracy theories online, Oct. 15, 2020

 

At stake are nothing less than the bedrock principles of our democratic countries: lawful individual freedoms, respect for constitutional and universal human rights and for national and international law, and the right to freely choose one’s political leaders, who remain accountable to justice for their acts.

The Psychology of Digital Disinformation

For a better understanding of the complexity of digital disinformation, we will first look at the basic theoretical structure of its psychological functioning and then at the value it has for Intelligence.

 

Disinformation operations by intelligence services or their proxies traditionally belong to the category of Psychological Operations (PsyOps) or, in Russian terms, to the so-called ‘Active Measures’ (aktivnye meropriyatiya). Active Measures have a long tradition in the realm of Intelligence and were frequently, and often successfully, used during the Cold War to influence foreign politics. They include deception and influence operations, covert actions, the use of ‘front organizations’, unwitting influence agents, false-flag operations and operations discrediting political opponents or economic rivals, or a mixture of all of these. Using the Internet-based social media world and the entire information ecosystem as platforms, digital disinformation operations are highly intricate clandestine processes.

 

Meanwhile, in the cyberspace age, the use of digital disinformation is no longer limited to intelligence services. Facilitated by the ubiquity of smart communication devices and the anonymity of the Internet, digital disinformation is also used by all kinds of non-state actors, e.g. large corporations, NGOs, influencers, lobby and special-interest groups, religious fundamentalist associations, terrorist organizations, organized crime, political parties, and publishers subservient to all sorts of interests.

 

The main purpose of digital disinformation is to spread a mix of true, partially true and false data aimed at creating a fictitious, deceptive new or ‘alternative reality’ for the witting or unwitting target to believe in, eventually inducing the target to act exactly as the disinformer anticipates. The target must be led to consider its reaction to the ‘alternative reality’ to be exactly the right response to the information received. That newly created ‘reality’ may be taken by the target either as confirmation of an existing assumption, or as post-factual corroboration of an already conceived idea of what ‘reality’ should look like. It may then trigger the conviction, or even the desire to believe, that precisely this ‘reality’ is the precondition and proper justification for an intended but not yet initiated action.

 

The more plausible the ‘alternative reality’ seems to the target, the better for the disinformer, because once the target has mentally and emotionally digested the received disinformation, it will begin to transform its belief into action. Sometimes that leads to a complete change of behavior, as can be observed with recruited terrorists frequently trapped by digital disinformation campaigns launched from thousands of miles away. At that stage, with the disinformer closely monitoring the development, every step the target may take can easily be anticipated and, if need be, subjected to the disinformer’s control and manipulation.

 

However, what seems to be a reliable psychological automatism is at the same time exposed to a number of risks of failure, blowback and potential drawbacks, which determine or influence its value for Intelligence.

 

Digital disinformation operations are by nature atypical compared to classical secret offensive intelligence activities, which are normally meant to stay secret. Disinformation operations, by contrast, although secretly planned and clandestinely implemented, carry – as a final consequence, or even by design – a great probability of eventually being exposed to the public: the intended goal, the deception of the public or some of its members into certain ensuing actions, cannot be concealed forever. Since professional disinformers know that their operation is valuable only for a limited period of time, they normally prepare an efficient exit strategy, including convincing deniability and credible cover-up schemes.

 

Another characteristic of digital disinformation is the sheer amount of data available today for this kind of operation. The Chinese government, for example, is estimated to generate around 448 million social media posts a year. By employing large ‘opinion factories’ or ‘troll farms’, massive digital disinformation or fake news spread on social networks can immediately reach, and create ‘alternative realities’ for, millions of people at the same time – a unique opportunity already successfully exploited by a number of major disinformation actors.

 

If successful, the cost-benefit balance for the disinformation actor is positive. In case of failure, however, it is a disaster, particularly if the actor is a government agency. In the worst case, especially when fake news is directed against its own people and the hoax is uncovered, nothing can save the responsible disinformer from being accused of manipulation and abuse. The resulting damage is long-lasting distrust and an enduring lack of confidence in the institution, person or agency concerned – too high a price for actors in democratic countries, but a cheap one for perpetrators in countries where there is no confidence in the government or its representatives anyhow. That seems to be one of the reasons why, until now, disinformation actors in democratic countries have been reluctant to employ this kind of active measure for domestic purposes.

Countermeasures Against Digital Disinformation

Due to their complexity, digital disinformation operations, especially those carried out by major intelligence services or powerful non-state actors, are traditionally extremely difficult or even impossible to detect and assess before, or even after, they go viral. That may be the main reason why, in the past, they were ignored or simply underestimated by most of the targets affected. Even once detected, it is still not trivial to reliably attribute the spreading of disinformation to a certain source or author, or to identify the individuals behind the operation.
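One classical heuristic analysts use as a starting point for author attribution is stylometry: comparing the character n-gram profile of an unattributed text against writing samples from known candidate authors. The sketch below is a deliberately simplified illustration of that idea; the function names, sample texts and thresholds are invented, and no serious attribution would rest on this signal alone.

```python
# Toy stylometric comparison: character trigram frequency profiles
# scored by cosine similarity. Illustrative only - real attribution
# combines many independent signals (metadata, infrastructure, timing).
import math
from collections import Counter

def profile(text, n=3):
    """Frequency counter of character n-grams, whitespace-normalized."""
    t = " ".join(text.lower().split())
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram frequency profiles."""
    common = set(p) & set(q)
    dot = sum(p[g] * q[g] for g in common)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def rank_candidates(unknown, candidates):
    """Rank known-author writing samples by stylistic closeness."""
    u = profile(unknown)
    scores = {name: cosine(u, profile(text))
              for name, text in candidates.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

candidates = {
    "author_a": "Clearly, the facts speak for themselves. Clearly.",
    "author_b": "u wont believe what they r hiding from u!!!",
}
unknown = "Clearly, the evidence speaks for itself."
ranking = rank_candidates(unknown, candidates)
print(ranking[0][0])  # author_a: closest stylistic match
```

Even this toy shows why attribution stays hard: a similarity ranking narrows the candidate pool but never proves authorship, and a competent disinformer can deliberately imitate someone else’s style.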

 

To develop effective countermeasures against digital disinformation, we first have to analyze the nature and the most significant variants of potential disinformation types.

The Whistleblower

Today, the appearance of leaked confidential information in the digital landscape has almost become a normal feature of digital life. As part of the common effort to combat illegal activities – e.g. corruption, money laundering, fiscal fraud, organized crime, terrorism, and political, governmental and private misconduct infringing basic human rights – whistleblowers who compromise and expose classified or confidential information have meanwhile achieved a socially and ethically accepted legal status in many democratic countries, backed by pertinent legislation.

 

The ethical dilemma of the whistleblower revealing state secrets is that he or she believes, or feels morally obliged, to act against the law for the sake of higher-level principles. Sacrificing one’s loyalty to the state in order to serve the people – in democracies considered the supreme sovereign – by illegally divulging to the public the wrongdoings of officials that are covered by law, or in some countries even by special laws permitting certain unethical activities of state intelligence services, is a hard decision to make. The personal consequences for the whistleblower may be disastrous.

 

The line between the justified interest of governments in maintaining secrecy and the justified right of an open democratic society to be informed, however, is a narrow path full of uncleared legal and social obstacles. Not every leak reflects the truth. Not every leaker is driven by the noble desire to save society, the climate or mankind as a whole; nor is he or she always aware that, while part of a leak may be justified, another part may jeopardize the lives of dozens of people compromised by it. The leak may also be a manipulated attempt by an adversary to misinform or deceive. Counterintelligence-driven disinformation analysts assess and judge these facts against that background, using powerful intelligent IT tools and proven techniques that allow them to take all possibilities into account. However, they are neither legislators nor judges. Whether legalistic or more liberal positions on whistleblowing will prevail remains to be seen; it will probably be decided once a broader awareness of the dangers involved has grown out of the numerous cases of disinformation analyzed.

 

On the other hand, even justified leaks of this kind frequently trigger a wave of digital disinformation campaigns staged by the protagonists exposed. These campaigns usually aim at blurring the whistleblower’s motivation and exposing him or her to all kinds of discredit. The goal is to destroy his or her credibility and personal integrity, and thus to cast doubt on the quality, truth and trustworthiness of the information leaked.

The Sounding-Board Technique

Another category of disinformation operations comprises attacks against journalists, popular influencers, bloggers, freelancers and other major online opinion spreaders, in the hope of using them as unwitting multipliers or amplifiers of the information to be spread. Employing fake accounts on social networks, hiding their true identity and using generative adversarial network (GAN) techniques to build a credible fake social profile, the disinformers try to trick their targets into becoming a sounding board for the ideas and memes they want to spread. Quite often, the targets are lured into well-paid publishing or advertising contracts, after having been carefully selected for personal or private weaknesses in their biography, or by exploiting temporary unemployment or other financial problems. Preferred fake-account covers are fictitious non-profit organizations, peace-loving antiwar groups, human rights activist groups, political opponents etc., thus confounding the roles of actor and victim.

 

Because of the intricacy of such an operation’s set-up, it normally takes some time before the operation is uncovered and analyzed, either by specialized disinformation journalists or by state-based or private intelligence services. However, with the disinformation having been disseminated on a large scale around the world, its content amplified by authoritative and trusted commentators, the damage is already done by the time the operation is stopped.

 

How to prevent these operations is difficult to say. It will take a collaborative effort by experienced disinformation intelligence specialists, disinformation journalists and the targeted victims to collect all the data needed for an informed analysis of each case – calling for a new form of cooperation between the intelligence community, journalists, scientists, law enforcement and victims.

Epilogue

This paper has tried, without exhausting the subject, to point to some of the major problems at the core of the digital disinformation challenge for Intelligence. The growing digital universe, with its countless ecosystems and billions of users, will continue to pose a serious challenge to the intelligence communities of free and democratic countries around the world. Without top-notch digital technology and an increasing effort to develop ever more sophisticated countermeasures, deception and influence operations by adversaries of the free world will prevail, to the detriment of all free societies.

“You too must not count overmuch on your reality as you feel it today, since, like that of yesterday, it may prove to be an illusion tomorrow.” – Luigi Pirandello, Six Characters in Search of an Author.

Copyright © 2020, Traversals Analytics and Intelligence GmbH. All Rights Reserved.
