How AI is transforming the world of intelligence
Psy-OPS and Info-OPS in the Age of AI
Psychological operations (Psy-OPS) and information operations (Info-OPS) are fundamental disciplines of contemporary hybrid warfare, aimed at influencing the perceptions, attitudes and behavior of individuals, groups and entire populations through the planned, strategic use of information. Framed within the broader context of cognitive warfare, these activities are decisive tools for achieving and maintaining strategic superiority without necessarily employing kinetic means. In the current international scenario, characterized by growing geostrategic competition between great powers, the relevance of such operations is heightened by the speed with which information can be disseminated globally and by the pervasiveness of digital technologies.
Traditional psychological operations rely mainly on human analysis and interpretation, on intuition, and on deep knowledge of the cultural and social dynamics of target populations. However, technological evolution, and in particular the emergence of advanced artificial intelligence (AI) capabilities, is introducing profound changes in the methodologies and potential of Psy-OPS and Info-OPS. AI allows unprecedented precision in audience segmentation and message personalization, since it can analyze huge amounts of data in real time and identify behavioral patterns that are difficult to detect through conventional methods.
According to a recent analysis by the European Union Institute for Security Studies, AI represents a genuine paradigm shift in the disciplines of information warfare, enabling predictive insight into the responses of target groups and greatly enhancing the effectiveness of influence campaigns. Applications include the automatic generation of highly credible content, advanced chatbots that steer online discussions, and so-called deepfake technology used to create convincing, manipulative audiovisual material that is difficult to identify and counter.
An emblematic example is provided by the activities attributed to the Russian Federation during recent international conflicts and crises, in which the combined use of AI and information operations demonstrated an ability to amplify internal divisions in adversary societies, undermining cohesion and weakening public trust in institutions. Russian military doctrine has explicitly integrated AI into its information strategies, confirming that the technology is now considered an indispensable force multiplier in influence and psychological-manipulation operations.
At the same time, the growing use of AI in Psy-OPS and Info-OPS brings significant vulnerabilities and strategic risks. According to research published by the RAND Corporation, automating the production and dissemination of manipulative content increases the risk of inadvertent escalation and complicates deterrence in the information domain, making it harder to attribute hostile information campaigns and to identify the true intent behind them. Moreover, the proliferation of these technologies among individuals and terrorist groups widens the scope for malicious, unregulated use, creating scenarios of instability and diffuse threats that are difficult to manage even for highly organized military apparatuses.
AI and HUMINT: an unprecedented synergy
The integration of AI with HUMINT (Human Intelligence), i.e. intelligence derived from human sources, represents one of the most significant evolutions under way in information collection and analysis. The use of advanced technology in support of HUMINT activities does not aim to replace the human element, but to enhance its analytical and decision-making capabilities, optimizing processes and operational timelines through strategic and tactical support tools.
In contemporary intelligence disciplines, HUMINT remains central to a deep understanding of the intentions and perceptions of individuals and groups of interest, providing information that is often impossible to obtain through technical means or SIGINT (Signals Intelligence). Processing and analyzing this information is extremely complex, as it involves huge amounts of unstructured data which, if poorly managed, can drastically reduce the timeliness and reliability of operational assessments.
AI intervenes in this context thanks to its intrinsic ability to analyze, correlate and interpret large volumes of raw data, significantly accelerating decision-making and increasing the accuracy of analysts' assessments. In particular, Natural Language Processing (NLP) techniques are applied to HUMINT reports in order to quickly identify recurring patterns, inconsistencies or elements of strategic interest that are often buried in huge amounts of information.
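To make the idea concrete, the following minimal sketch shows one way NLP-style techniques could surface overlapping, and therefore potentially contradictory, passages across free-text reports so that an analyst can review them. The sample texts, the similarity threshold and the overall approach are illustrative assumptions, not a description of any operational system.

```python
# Illustrative sketch (not an operational tool): use TF-IDF similarity to surface
# report pairs that discuss the same topic, so an analyst can check them for
# agreement or contradiction. The report texts below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "Source states the shipment left the port on Monday night.",
    "Source claims the shipment never left the port this week.",
    "Independent contact confirms unusual activity at the port on Monday.",
]

# Vectorize the reports and compute pairwise lexical similarity
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(reports)
similarity = cosine_similarity(tfidf)

THRESHOLD = 0.25  # hypothetical value; tuned per corpus in practice
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        if similarity[i, j] > THRESHOLD:
            print(f"Reports {i} and {j} overlap (score {similarity[i, j]:.2f}) - review for consistency")
```

In practice such lexical overlap would only be a first filter; the strategic judgement about whether two reports actually conflict remains with the human analyst, as the section argues.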
AI can also assist in the identification and evaluation of human sources themselves, analyzing their reliability through predictive models based on past behavior, cross-checks and algorithm-supported psychological evaluations. This increases operational efficiency and makes it possible to identify potential vulnerabilities at an early stage, such as the risk of double-dealing, manipulation or disinformation.
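As a purely illustrative example of the kind of predictive model referred to above, the sketch below fits a logistic regression on a toy table of historical source evaluations. The feature set, labels and numbers are hypothetical assumptions made for the sake of the example; a real system would rest on validated tradecraft criteria, not these toy values.

```python
# Minimal sketch, assuming a small tabular dataset of past source evaluations.
# All features, labels and values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per source:
# [past corroboration rate, share of reports later contradicted, years of reporting]
X = np.array([
    [0.9, 0.05, 6.0],
    [0.4, 0.40, 1.0],
    [0.7, 0.10, 3.0],
    [0.2, 0.55, 0.5],
])
y = np.array([1, 0, 1, 0])  # 1 = historically assessed as reliable

model = LogisticRegression().fit(X, y)

# Score a new source profile (hypothetical values)
new_source = np.array([[0.6, 0.20, 2.0]])
print("Estimated reliability:", model.predict_proba(new_source)[0, 1])
```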
Even in the recruitment and management of human sources, AI's predictive capacity helps to identify suitable profiles more precisely, anticipating operational problems and facilitating the dynamic management of information networks. A recent publication by the US Defense Intelligence Agency (DIA) underlined how AI tools are now used systematically to monitor, in real time, behavioral signals that could indicate the betrayal or compromise of a source, allowing preventive interventions that protect both the sources themselves and the operational integrity of the mission.
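The kind of real-time behavioral monitoring described by the DIA could, in a very simplified form, resemble unsupervised anomaly detection over behavioral indicators. The sketch below uses an isolation forest on hypothetical features (contact frequency, report length, response delay); the data, the feature choice and the thresholds are assumptions for illustration only.

```python
# Minimal sketch, not an operational tool: flag behavior that deviates sharply
# from a source's own history using an IsolationForest. Values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical weekly indicators for one source:
# [contacts per week, average report length (words), average response delay (hours)]
history = np.array([
    [3, 450, 12],
    [2, 500, 10],
    [3, 480, 14],
    [2, 470, 11],
    [3, 520, 13],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

# A new observation that departs from the usual pattern
latest = np.array([[0, 60, 96]])
flag = detector.predict(latest)  # -1 = anomalous, 1 = consistent with history
print("Review recommended" if flag[0] == -1 else "Pattern consistent with history")
```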
Cognitive warfare aided by AI
Cognitive warfare is an evolved form of conflict that aims to capture and control the adversary's perceptions, opinions and decisions through highly sophisticated informational-psychological strategies. With the help of AI, the capacity to wage cognitive warfare effectively has grown exponentially, as the technology makes it possible to analyze massive amounts of data, identify behavioral patterns, and influence decisions in real time and at scale.
The defining feature of cognitive warfare in the digital age is the ability to exploit advanced machine learning and deep learning algorithms for detailed psychological profiling of individuals and social groups: these tools make it possible to identify cognitive and emotional vulnerabilities with great precision and to develop personalized information campaigns designed to maximize manipulative effectiveness and influence on public perception.
A recent study by the NATO Strategic Communications Centre of Excellence highlighted how advanced AI-based techniques have been used in influence operations to manipulate political opinions, exacerbate social polarization and destabilize democratic processes in various European and North American countries. These operations exploit networks of artificial accounts managed by algorithms that spread targeted messages and manipulative content, producing an amplified psychological impact that is difficult to counter through traditional communication strategies.
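From the defender's perspective, a first rough indicator of such coordinated account networks is behavioral similarity, for example near-identical messages posted within a short time window. The sketch below illustrates this heuristic on invented data; the account names, messages and thresholds are purely hypothetical, and real detection pipelines are far more sophisticated.

```python
# Illustrative heuristic (not a production detector): flag pairs of accounts that
# post near-identical messages at nearly the same time, a common symptom of
# coordinated inauthentic behavior. All data below is invented.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    {"account": "user_a", "time": 100, "text": "The election results cannot be trusted, share this now!"},
    {"account": "user_b", "time": 104, "text": "The election results cannot be trusted - share this now"},
    {"account": "user_c", "time": 900, "text": "Lovely weather at the lake this weekend."},
]

TEXT_SIMILARITY = 0.9   # hypothetical threshold on string similarity
TIME_WINDOW = 60        # hypothetical window in seconds

for p1, p2 in combinations(posts, 2):
    similar = SequenceMatcher(None, p1["text"], p2["text"]).ratio() >= TEXT_SIMILARITY
    close_in_time = abs(p1["time"] - p2["time"]) <= TIME_WINDOW
    if similar and close_in_time and p1["account"] != p2["account"]:
        print(f"Possible coordination: {p1['account']} and {p2['account']}")
```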
A paradigmatic example of AI-assisted cognitive warfare is the use of deepfake technologies, capable of generating realistic but falsified audiovisual content. These technologies have recently been identified as a powerful tool in the hands of hostile actors for creating false or distorted narratives that undermine public trust in institutions and political leaders, amplifying suspicion and internal division. In addition, large language models capable of producing synthetic text (such as GPT-4) now make it possible to generate highly credible, contextualized content that is difficult to distinguish from that written by humans.
According to a report by the RAND Corporation, cognitive warfare conducted through AI also poses a problem of attribution: the origin of information operations is often hidden behind complex networks of virtual actors and intermediaries, making it extremely difficult to trace the identity of the real instigator. This limits the response options available to states and favors scenarios of involuntary escalation.
Conclusions
In response to emerging risks, numerous national and international institutions are exploring mitigation strategies based on algorithmic transparency, information-awareness education and international cooperation aimed at defining ethical norms and rules of engagement that limit abuses in AI-based cognitive operations. Nevertheless, the speed with which the technologies involved evolve makes it difficult for political and military authorities to keep pace with the necessary regulation and doctrinal updating.
The challenge for the intelligence apparatus and for political decision-makers will therefore be to anticipate and govern technological evolution, rather than passively suffer its consequences, thus protecting global geopolitical stability.
References
· Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Amodei, D. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv preprint arXiv:1802.07228.
· Fiott, D., & Lindstrom, G. (2020). Artificial Intelligence and EU Defence: A New Paradigm for Strategic Autonomy? European Union Institute for Security Studies (EUISS), Brief No. 3.
· Kello, L. (2022). Cyber Threats, Influence Operations, and Artificial Intelligence: The New Frontier of Strategic Competition. Survival, 64(1), 7-32.
· Mazarr, M. J., Bauer, R. M., Casey, A., Heintz, S., & Matthews, L. J. (2019). Hostile Social Manipulation: Present Realities and Emerging Trends. RAND Corporation.
· Moliner, C. (2021). Cognitive Warfare: The Mind is the Battlefield. NATO Strategic Communications Centre of Excellence.
· Pamment, J., Bay, S., Dencik, L., & Hedling, E. (2021). Influence Operations and Information Warfare: Assessing Risks and Opportunities. Routledge.
· Defense Intelligence Agency. (2021). Annual Threat Assessment Report. DIA Publications.
· Horowitz, M. C. (2021). Artificial Intelligence, International Competition, and the Balance of Power. Texas National Security Review, 4(2), 37-57.
· NATO Science and Technology Organization (STO). (2022). Artificial Intelligence in Military Intelligence and Surveillance. NATO Publications.
· Waltzman, R., & Shen, J. (2020). The Role of Artificial Intelligence in Enhancing HUMINT Operations. RAND Corporation.
· Bendett, S., & Kania, E. (2022). The AI Future of Warfare: Strategic Implications of AI and Machine Learning. Center for a New American Security (CNAS).
· DiResta, R. (2022). Deepfakes and Cognitive Warfare: Emerging Threats and Countermeasures. Brookings Institution.
· NATO Strategic Communications Centre of Excellence (2021). Cognitive Warfare: The Future of Warfare. NATO StratCom CoE.
· Vilmer, J. J., Escorcia, A., Guillaume, M., & Herrera, J. (2018). Information Manipulation: A Challenge for Our Democracies. French Ministry for Europe and Foreign Affairs.






