Tackling Information Manipulation: A Methodological Approach to TTP Detection

09/07/2025

How a structured framework helps identify and counter the tools and tactics of disinformation actors.

Introduction

In today’s multi-dimensional information environment, detecting and countering Foreign Information Manipulation and Interference (FIMI) is more than a matter of technical fixes. It requires robust methodological foundations: ones that can track evolving threat channels, analyze digital detection technologies, and distill insight into effective practitioner tactics on the ground. This blog presents a structured approach to understanding and responding to FIMI through the lens of Tactics, Techniques, and Procedures (TTPs), the behavioral signatures of malign influence actors.

Knowledge of the Threat Environment

Effective response is grounded in a sound understanding of the threat environment. Information manipulation campaigns rarely manifest in a regular pattern. Rather, they commonly blend coordinated inauthentic action, algorithmic amplification, narrative manipulation, and hybrid disinformation techniques that change over time. To begin building this understanding, it is essential to identify the main actors and the digital platforms they use. These include social media accounts, forums, botnets, media sources, and encrypted communication networks. The aim is to know the digital footprints and strategic measures used to spread disinformation, influence public discussion, or erode institutional trust. Understanding the broader context is equally essential, as disinformation campaigns are often shaped by current events, political tensions, or societal disruptions. International and domestic news, geopolitical hotspots, and even natural disasters can trigger tailored disinformation campaigns. An effective threat-mapping platform therefore also includes sociopolitical indicators and sentiment data to forecast target weaknesses and the platforms preferred for influence operations. A threat environment of this kind can hardly be understood through a single disciplinary lens; building this knowledge requires an interdisciplinary approach that incorporates open-source intelligence (OSINT), behavioral science, data science, and engagement with civil society and journalism stakeholders.
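As a toy illustration of what a threat-map record might look like in practice, the sketch below models a monitored channel alongside a sentiment indicator. All field names, the sentiment scale, and the risk threshold are hypothetical assumptions, not part of any formal schema described here.

```python
from dataclasses import dataclass, field

# Hypothetical threat-map record; field names and scales are illustrative.
@dataclass
class ThreatChannel:
    name: str
    channel_type: str          # e.g. "social_media", "forum", "botnet"
    sentiment_score: float     # assumed scale: -1 (hostile) .. 1 (benign)
    recent_triggers: list = field(default_factory=list)  # events that spiked activity

def high_risk(channels, threshold=-0.5):
    """Flag channels whose sentiment indicator has turned sharply negative."""
    return [c.name for c in channels if c.sentiment_score <= threshold]

channels = [
    ThreatChannel("forum-x", "forum", -0.7, ["election"]),
    ThreatChannel("page-y", "social_media", 0.2),
]
print(high_risk(channels))  # ['forum-x']
```

In a real system the sentiment score would come from upstream analysis rather than being hand-assigned, but the structure shows how sociopolitical indicators can be attached to mapped channels for triage.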

Categorizing Tactics and Techniques

Once the environment has been mapped, the next priority is characterizing how these operations are conducted, in terms of their TTPs. This step breaks malign activity down into distinct technical and behavioral patterns. These include coordinated posting by bots, cross-platform narrative diffusion, image or video manipulation such as deepfakes, and timing optimized to coincide with crises or elections. Other behaviors include tapping into social grievances or identity politics to provoke emotional responses and polarize audiences. Furthermore, language manipulation, satirical misrepresentation, and false metadata (such as forged timestamps or doctored source identifiers) have become potent tools for spreading misinformation. Classification also involves measuring the persistence and adaptability of strategies, i.e., the way actors change platforms or rebrand themselves to evade detection. The RESONANT project uses the DISARM framework as a standardized taxonomy to classify such tactics, techniques, and procedures. This shared structure facilitates cross-stakeholder communication and coordinated detection efforts. The categorization process is further supported by the RESONANT Suite of Tools, which enables the detection and analysis of TTPs in operational contexts.

This classification permits analysts to move from isolated incidents to pattern analysis. It provides the foundation for predicting likely threat routes and allows prioritization of resources and responses. Most importantly, it establishes a common language across technical, legal, and policy communities, a requirement for concerted action.
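A minimal sketch of how such classification enables the move from isolated incidents to pattern analysis: incidents tagged with DISARM-style technique identifiers can be aggregated to surface recurring techniques. The incident data, platform names, and technique IDs below are illustrative placeholders, not real DISARM mappings.

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative incident record; technique IDs mimic DISARM's "Txxxx" naming.
@dataclass
class Incident:
    incident_id: str
    platform: str
    techniques: list  # DISARM-style technique identifiers observed

def recurring_patterns(incidents, min_count=2):
    """Count technique occurrences across incidents to surface recurring patterns."""
    counts = Counter(t for inc in incidents for t in inc.techniques)
    return {t: c for t, c in counts.items() if c >= min_count}

incidents = [
    Incident("inc-001", "platform-a", ["T0049", "T0086"]),
    Incident("inc-002", "platform-b", ["T0049", "T0023"]),
    Incident("inc-003", "platform-a", ["T0049", "T0086"]),
]
print(recurring_patterns(incidents))  # {'T0049': 3, 'T0086': 2}
```

Even this trivial aggregation shifts the analyst's view from individual posts to actor behavior over time, which is the point of a shared taxonomy.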

Evaluating Digital Detection Tools

Technology steps to the forefront in the battle against disinformation, yet not all technologies are equal. A sound strategy is to create an evaluation framework that compares tools against real needs. This includes assessing how reliably a tool can identify false or faked material, whether it can be used by non-technical staff or public institutional actors, and whether it can scale to process large collections of information in near real time. It also considers whether the tool can be easily integrated into current workflows and whether its methodology and sources are clearly described and documented. Evaluation standards must also cover explainability, interoperability, and GDPR compliance. Tools must demonstrate resilience against adversarial tampering (e.g., evasion tactics and decoys) and provide audit trails for accountability. The tooling environment must be re-tested periodically to ensure it reflects technological advances as well as the evolving threat environment. Public-private partnerships (PPPs) and field testing with end users (journalists, LEAs, analysts) are needed to ensure usability and usefulness in real-world settings.
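One way to operationalize such criteria is a simple weighted rubric. The criterion names, weights, and the 0–5 rating scale below are assumptions for illustration; a real evaluation framework would derive them from stakeholder requirements and field testing.

```python
# Hypothetical evaluation rubric: criteria and weights are illustrative
# assumptions, chosen so the weights sum to 1.0.
CRITERIA_WEIGHTS = {
    "detection_reliability": 0.30,
    "usability_non_technical": 0.15,
    "scalability_near_realtime": 0.20,
    "workflow_integration": 0.10,
    "transparency_documentation": 0.10,
    "explainability": 0.05,
    "adversarial_resilience": 0.05,
    "gdpr_compliance": 0.05,
}

def score_tool(ratings):
    """Weighted sum of 0-5 ratings per criterion, normalized to 0..1."""
    if set(ratings) != set(CRITERIA_WEIGHTS):
        raise ValueError("ratings must cover exactly the defined criteria")
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in ratings) / 5.0

perfect = {c: 5 for c in CRITERIA_WEIGHTS}
print(score_tool(perfect))  # 1.0
```

Making the weights explicit forces the evaluation debate (is reliability really worth six times explainability?) into the open, rather than leaving it implicit in ad-hoc comparisons.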

Grounding in Use Cases

Notably, methodology is not developed in a vacuum. It should be use-case driven, applied to real-world scenarios such as disinformation targeting elections, online influence operations against public health emergencies, or sophisticated campaigns targeting law enforcement or immigration policy. Each use case has unique requirements: speed of response, linguistic localization, cultural awareness, or regulatory compliance. The method is versatile enough to accommodate this through modular workflows that can be tailored to stakeholder environments. By pairing TTP detection with concrete societal and security use cases, the approach ensures findings are not only theoretically sound but actionable for practitioners in the law enforcement, policy, and civil society domains. Use-case testing also strengthens validation and community confidence, because outcomes are judged by real-world effect rather than theoretical scoring. Feedback loops in the process allow for adaptive learning and promote knowledge sharing across domains.
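The modular-workflow idea can be sketched as a mapping from use-case profiles to the pipeline modules they require. The profile fields, module names, latency budgets, and locales below are hypothetical examples, not components named in the text.

```python
# Hypothetical use-case profiles; all module names and constraints are
# illustrative assumptions.
USE_CASE_PROFILES = {
    "election_integrity": {
        "modules": ["bot_detection", "narrative_tracking", "deepfake_screening"],
        "max_latency_minutes": 15,   # elections demand near-real-time response
        "locales": ["el", "en"],     # linguistic localization requirement
    },
    "public_health": {
        "modules": ["claim_matching", "narrative_tracking"],
        "max_latency_minutes": 60,
        "locales": ["en"],
    },
}

def build_pipeline(use_case):
    """Assemble the ordered module list for a given use-case profile."""
    profile = USE_CASE_PROFILES[use_case]
    return [f"run:{m}" for m in profile["modules"]]

print(build_pipeline("election_integrity"))
# ['run:bot_detection', 'run:narrative_tracking', 'run:deepfake_screening']
```

Encoding requirements as data rather than code is what lets the same detection core be re-targeted to a new stakeholder environment by writing a profile instead of a new pipeline.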

The Way Forward

With increasingly sophisticated FIMI threats, a reactive approach is not sufficient. A systematic, analytical approach, grounded in familiarity with TTPs and informed by the judicious evaluation of detection tools, is the path forward. This methodology equips stakeholders with the ability to distinguish genuine threats from noise, and with the digital tools to respond quickly, responsibly, and in concert. In the fight against disinformation, process matters, because effective strategy begins with effective analysis.

Organisation / Author: KEMEA