ARTIFICIAL INTELLIGENCE IN INTELLIGENCE OPERATIONS IN FINLAND
The Anatomy of AI-Assisted Influence
A structural analysis based on public sources
2 March 2026
I. Introduction
The article The Invisible Guardian, published on 20 February 2026, mapped the domestic operating environment of Finland's military intelligence: the legislation, the methods, the legal standing of civilian intelligence operatives, and the limits of oversight. It ended with the question of whether trust has replaced oversight.
This article picks up where the previous one left off.
Artificial intelligence has fundamentally transformed intelligence operations. Profiling that once required months and dozens of analysts now happens in hours. An influence operation that once demanded an extensive media network can now be tailored to a single target in real time. Psychological pressure that was once crude and recognisable is now subtle, personalised, and nearly invisible.
Finland's intelligence law speaks of telecommunications interception, GPS tracking, and undercover operations. It does not speak of algorithmic profiling, personalised media manipulation, or AI-assisted psychological influence. The law was written for a world that no longer exists.
This article opens up the new reality that has replaced the old one. It does so on the basis of public sources alone: it contains no classified information and describes only structures. Within that framework, it conducts a structural analysis of the Ano Turtiainen case mentioned in The Invisible Guardian.
It is left to the reader to assess whether the structures described in this article are theoretical or already in use.
II. AI in Intelligence Operations: The International Framework
The use of AI in intelligence operations is not a future scenario. It is the present — documented and publicly acknowledged.
The United States Department of Defense established the Joint Artificial Intelligence Center (JAIC) in June 2018, tasked with integrating AI across all branches of the military. In February 2022, JAIC was replaced by the Chief Digital and Artificial Intelligence Office (CDAO), which received broader authority. In January 2026, Secretary of Defense Pete Hegseth released the new AI Acceleration Strategy, setting the goal of "AI-first" combat capability and defining seven so-called Pace-Setting Projects for accelerated AI deployment. The Pentagon requested 1.8 billion dollars for AI and machine learning in its 2024 budget, and according to a GAO report, the Department of Defense already had at least 685 active AI projects by 2021.
The United Kingdom's signals intelligence organisation GCHQ published a report in February 2021 titled "Pioneering a New National Security: The Ethics of Artificial Intelligence", in which it stated that AI is "a critical issue for the security of the United Kingdom in the 21st century." GCHQ's then-director Jeremy Fleming stated that AI is "already indispensable in many operations" and enables analysts to work with vast volumes of data. In the 2025 Spending Review, the UK allocated an additional £0.6 billion in funding for its intelligence services (MI5, SIS, and GCHQ) through 2028–29.
Israel's Unit 8200 is one of the world's most renowned signals intelligence units. Its AI capability is documented in multiple public sources. The unit has developed AI-based systems that analyse vast quantities of communications data, identify patterns, and generate target profiles automatically. In connection with the Gaza conflict, international media reported that Israel used AI systems named "Lavender" and "Gospel" for target identification.
The European Parliament has published several reports on the military use of AI. The EU's AI regulation (AI Act), which came into force in 2024, explicitly excludes military use from its scope of application. The significance of this legal gap is discussed in greater detail in Chapter XI.
NATO established the DIANA programme (Defence Innovation Accelerator for the North Atlantic) in 2022, tasked with accelerating the deployment of AI and other emerging technologies in defence. The initiative was agreed upon at the 2021 Brussels Summit and its charter was adopted at the 2022 Madrid Summit. Finland is a member of DIANA.
China's social credit system provides a point of comparison for where AI-based citizen profiling can lead at a state level. The system combines economic behaviour, social media activity, movement data, and government registry data into a single score that determines a citizen's rights and opportunities.
The AI cooperation of the Five Eyes intelligence community (United States, United Kingdom, Canada, Australia, and New Zealand) is publicly documented. The community shares intelligence and develops joint AI tools. Finland is not a Five Eyes member, but NATO membership connects it to a broader intelligence-sharing network.
A historic shift has taken place. Human-driven analysis — where a trained analyst read reports and formed an assessment — has been replaced by machine-driven profiling, where AI processes millions of data points and produces results in seconds. This change affects the scale, speed, and precision of targeting in ways that no legislation has yet adapted to.
Sources: Pentagon AI Acceleration Strategy (January 2026); CDAO (Wikipedia); GCHQ, "Pioneering a New National Security" (2021); UK Spending Review 2025; EU AI Act (2024); NATO DIANA; GAO AI Report (2022).
III. The Anatomy of Target Profiling: How AI Builds a Profile of a Person
The profiling capability of AI available to an intelligence organisation differs fundamentally from what commercial AI can do. The difference is in the data.
Commercial AI sees what a person has published. Intelligence AI sees everything.
An intelligence authority's powers grant access to data sources that a private individual cannot obtain. These include the target's emails, instant messages, and text messages. Call records and metadata: who called whom, when, for how long, and from where. Web browsing and search history. Social media content, including deleted content. Location data and movement records. Medical history and health data. Financial data: income, debts, account numbers, transactions. Government records: education and training history, employment history, military service records, police records, enforcement records, and tax authority data.
This data is not merely a list. It is an entire person reduced to precise data points.
Academic research has demonstrated what can be inferred from a digital footprint alone. Cambridge University researchers Michal Kosinski, David Stillwell, and Thore Graepel demonstrated in a 2013 study that an AI model could predict a person's sensitive characteristics — such as sexual orientation, political views, and personality traits — based on Facebook likes (PNAS, 2013). In a 2015 follow-up study, Youyou, Kosinski, and Stillwell showed that the model could assess a person's personality more accurately than their colleagues (10 likes), family members (150 likes), and ultimately more accurately than their spouse (300 likes) (PNAS, 2015). These studies used only public likes. No messages. No health data. No government records.
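The method behind these results is not exotic. Reduced to its core, the 2013 pipeline is a sparse user-by-like matrix, dimensionality reduction, and a linear model for each predicted trait. The sketch below illustrates that structure on synthetic data; the matrix sizes, the fabricated trait, and all parameters are illustrative stand-ins, not the researchers' actual data or code.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for the user-by-like matrix: 2,000 users, 5,000 likes.
# The real PNAS 2013 matrix covered roughly 58,000 volunteers.
n_users, n_likes = 2000, 5000
likes = (rng.random((n_users, n_likes)) < 0.01).astype(np.float32)

# A fabricated binary trait correlated with a hidden subset of likes,
# standing in for a political view or a personality dimension.
signal = rng.choice(n_likes, size=50, replace=False)
trait = (likes[:, signal].sum(axis=1) + rng.normal(0, 1, n_users) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

# The study's approach in miniature: project the like matrix onto latent
# components (SVD), then fit a linear classifier for the trait.
model = make_pipeline(TruncatedSVD(n_components=100, random_state=0),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the exercise is its ordinariness: every component is a standard library call, and the only scarce ingredient is the data.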
The Cambridge Analytica scandal demonstrated in 2018 how this research was applied in practice. The company's CEO Alexander Nix presented to clients in detail how psychological profiles were used for political influence. Christopher Wylie, the company's former research director, described a system that built a "psychological weapon" from Facebook data. This was possible using social media data alone.
In the hands of an intelligence organisation, the scope of data is an order of magnitude greater. The model does not merely know what a person thinks publicly. It knows what they write in their private messages. It knows whom they call in the middle of the night. It knows what they search for when no one else is watching. It knows their diagnoses, their debts, their political history, their military service record.
From this data, AI constructs a profile that encompasses cognitive abilities and thinking styles, motivational structure, fears and vulnerabilities, decision-making patterns, stress responses and coping strategies, relationship dynamics, ideological commitment and its depth, the impact of health on functional capacity, financial pressure points, and predictability and response patterns.
The profile is not a static document. It is a dynamic model that updates in real time as new data comes in. It predicts the target's likely response to various situations. It identifies the points where the person is most vulnerable. It recommends approaches most likely to produce the desired outcome.
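What "a dynamic model that updates in real time" means can be illustrated with the simplest possible online learner: a Beta-Bernoulli estimate of how likely a given type of approach is to work on a target, sharpened by every observed reaction. The sketch below shows the updating principle only; it assumes nothing about any real system, and the approach categories are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ResponseEstimate:
    """Beta-Bernoulli estimate of how likely one type of approach is to work.

    Starts from an uninformative prior (alpha = beta = 1) and sharpens with
    every observed outcome: this is the 'updates in real time' part.
    """
    alpha: float = 1.0  # observed successes + 1
    beta: float = 1.0   # observed failures + 1

    def update(self, succeeded: bool) -> None:
        if succeeded:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def probability(self) -> float:
        return self.alpha / (self.alpha + self.beta)

# One estimate per hypothetical approach; each new observation shifts the
# recommendation toward whatever has previously worked on this target.
estimates = {"financial pressure": ResponseEstimate(),
             "social isolation": ResponseEstimate()}
estimates["financial pressure"].update(succeeded=True)
best = max(estimates, key=lambda k: estimates[k].probability)
print(best, round(estimates[best].probability, 2))  # financial pressure 0.67
```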
DARPA's SocialSim programme has publicly documented how social network modelling can be used to simulate the spread of information and predict the behaviour of individuals and groups. The programme was developed explicitly for intelligence use.
The accuracy and comprehensiveness of profiling depend on the available data. The more data, the more accurate the profile. In the hands of an intelligence authority, the constraints on data are primarily legal, not technical.
Sources: Kosinski, Stillwell & Graepel, "Private Traits and Attributes Are Predictable from Digital Records of Human Behavior" (PNAS, 2013); Youyou, Kosinski & Stillwell, "Computer-based personality judgments are more accurate than those made by humans" (PNAS, 2015); Wylie, Christopher, "Mindf*ck: Inside Cambridge Analytica's Plot to Break the World" (2019); Kaiser, Brittany, "Targeted" (2019); DARPA SocialSim Program.
IV. The Digital Architecture of an Influence Operation
A profile is a tool. What is done with it depends on the mission.
In an intelligence operation, the mission brief is the decisive factor. The same target, the same data, the same profile can be placed within two entirely different frameworks — and the results are fundamentally different.
Constructing the Intelligence Framework
AI does not decide who is a threat — at least not yet. A human decides. AI executes.
The operator formulates the mission brief: the target's name, available data, the objective of the operation, and its parameters. This framework determines everything the model produces. From a target framed as a security threat, the model produces a threat assessment. From the same target framed as a political actor, the model produces an influence analysis. From a target framed as a potential recruit, the model produces an approach strategy.
Same data. Different framework. Entirely different conclusions.
This is the fundamental problem of AI-assisted intelligence: the choice of framework is a human decision that takes place before any model produces a single result. It is also the stage where biases, political pressures, and organisational interests exert the greatest influence. The model does not question the framework. It executes it.
Strategic Planning
Based on the mission brief, AI produces a multi-layered strategy for neutralising the target. The strategy typically divides into psychological, social, institutional, digital, and physical levels. Each level supports the others, and no single element can be proven to be coordinated on its own.
This is the core of the structure: coordination that looks like coincidence. A single tax audit is a normal government action. A single negative article is normal media criticism. A single social media account restriction is normal platform moderation. Together, they form a whole whose existence cannot be proven by examining any single part in isolation.
AI enables the planning and coordination of such a whole in a way that would previously have required the cooperation of dozens of people and months of preparation. The model produces escalation steps, timelines, and contingency plans. It predicts the target's likely reactions and adapts the strategy accordingly.
Exploiting the Psychological Profile
Concrete influence actions are derived from the profile. Identifying vulnerabilities is at the heart of the process: family relationships, financial pressures, professional identity, religious conviction, and social network are all potential pressure points.
The model recommends different strategies for different profile types. A financially vulnerable target responds to different pressures than an ideologically motivated one. A socially isolated target requires a different approach than a well-connected public figure. The model analyses these variables and produces a personalised strategy.
Personalisation is the decisive change from traditional intelligence operations. Previously, strategies were general templates applied to the target. Now, the strategy is built from the ground up around a single target's profile.
Linguistic Framing: The Architecture of Indirect Influence
This is an area of intelligence operations not previously described in public literature within an AI context. It deserves particular attention because it represents a form of influence that is nearly impossible to detect in ordinary interaction.
Traditional propaganda studies focus on what is said: claims, arguments, narratives. The most effective influence, however, operates on a different level. It operates in the structure of language, not its content.
A linguistic model developed from the work of Milton H. Erickson, known in the literature as the Milton model, describes the phenomenon in which the most influential part of a message is not what is said directly, but what the statement must presuppose in order to be true. These hidden assumptions are called presuppositions.
A simple example: the sentence "Have you already noticed how this situation is affecting you?" contains several hidden assumptions. It assumes that the situation truly affects the listener. It assumes that the effect has already begun. It assumes that the listener will notice it. None of these assumptions is spoken aloud, and the listener does not typically evaluate them consciously. The mind focuses on the actual question, "have you noticed", while the presuppositions slip past conscious awareness.
The context in which the sentence appears makes no difference. The same structure could be fed into the target's news feed as an advertisement: a bright image of an electric car, seemingly promoting electric vehicles as a sensible answer to rising fuel prices. The advertisement presupposes that rising prices have personal significance for the reader. It assumes the reader is looking for a solution. It frames the electric car as "the answer", even though the operative presupposition is the claim that "the situation" truly affects the reader and that something must be done about it. The reader consciously processes the surface message of the advertisement, the electric car, while the hidden assumptions bypass critical evaluation.
Presuppositions operate on multiple levels. Simple presuppositions contain one hidden assumption. Complex presuppositions embed multiple assumptions within the same structure, making it impossible for the recipient to consciously process all of them simultaneously. Temporal presuppositions use words such as "before," "after," and "while," which assume the inevitability of certain events. Presuppositions can also be stacked, so that the surface level is obvious, but a second and even third level lie hidden beneath the first assumption, operating on the subconscious. By employing presuppositions, seemingly genuine cause-and-effect relationships can be created in the target's mind when operating at a sufficiently subtle level, with messages and other supporting actions timed for maximum impact.
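Such structures rest on identifiable trigger words, and this cuts both ways: what can be generated mechanically can also be flagged mechanically. The sketch below is a deliberately simple detector for common English presupposition triggers; the categories and word lists are illustrative, not a complete linguistic inventory.

```python
import re

# Common presupposition triggers, grouped by the assumption each smuggles in.
TRIGGERS = {
    "change of state (assumes a prior state)":  r"\b(already|still|yet|begin\w*|stop\w*|continu\w*)\b",
    "temporal (assumes the event occurs)":      r"\b(before|after|while|when|since)\b",
    "factive (assumes its complement is true)": r"\b(notic\w*|realis\w*|realiz\w*|know\w*|regret\w*)\b",
    "iterative (assumes it happened before)":   r"\b(again|another|return\w*)\b",
}

def flag_presuppositions(sentence: str) -> list[tuple[str, str]]:
    """Return (trigger word, assumption type) pairs found in the sentence."""
    hits = []
    for label, pattern in TRIGGERS.items():
        for match in re.finditer(pattern, sentence, flags=re.IGNORECASE):
            hits.append((match.group(), label))
    return hits

example = "Have you already noticed how this situation is affecting you?"
for word, label in flag_presuppositions(example):
    print(f"{word!r}: {label}")
# 'already': change of state (assumes a prior state)
# 'noticed': factive (assumes its complement is true)
```

A detector this crude only finds candidate triggers; whether a given sentence actually exploits them is a judgement the reader still has to make.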
In an intelligence context, this mechanism combines with algorithmic content steering. Algorithms can be optimised to expose the target to specific content, but the actual influence is performed by the linguistic structure of that content. If someone wanted to steer the target in a particular direction, simply showing positive content would be crude and detectable. Instead, an advertisement for apartments on the Costa del Sol in Spain, fed into the target's news feed, can implicitly presuppose that the target's current environment is problematic, that leaving is the solution, and that relief can be found elsewhere. The advertisement says none of this out loud. It talks about apartments, but its presuppositions speak of escape. AI can produce such structures systematically, tailored to the target's profile, without the operator needing any understanding of deep linguistic influence.
Alongside presuppositions, another key technique is deliberate ambiguity. The vagueness of an expression is not carelessness. It is by design. The recipient's mind fills the ambiguity with their own meanings, making the message feel personally resonant without the sender having actually said anything concrete.
Combined with presuppositions, ambiguity creates messages that simultaneously assume something to be true while leaving the precise content for the recipient's subconscious to fill in. The recipient feels they have made their own interpretation, even though the direction of that interpretation was already built into the structure of the message.
In official communications, institutional contacts, or messages relayed through third parties, these techniques converge: the linguistic structure steers the target's thinking without direct commands, while ambiguity leaves the precise content for the target to fill in themselves. The target does not recognise the influence because the message looks like normal official communication. An AI model can tailor these structures in real time based on the target's profile: using precisely those ambiguities that resonate with the target's fears and precisely those presuppositions that steer thinking in the desired direction.
This is a form of influence that traditional media studies does not recognise. It leaves no traces. It requires no lies. It uses truth as a structure within which assumptions are hidden — assumptions the recipient never consciously evaluates.
Sources: Erickson, Milton H. and Rossi, Ernest, "Hypnotherapy: An Exploratory Casebook" (1979); Bandler, Richard and Grinder, John, "Patterns of the Hypnotic Techniques of Milton H. Erickson, M.D." (1975); Cialdini, Robert, "Influence: The Psychology of Persuasion" (2006); Jamieson, Kathleen Hall, "Cyberwar: How Russian Hackers and Trolls Helped Elect a President" (2018).
V. Why Would This Be Done? The Historical Evidence
The preceding chapters have described how AI-assisted profiling and influence work structurally. They have not answered the more important question: why would an intelligence organisation direct these methods at its own citizen?
The question is not hypothetical. There is a historical answer — documented and studied, from multiple countries and multiple decades. The answer is not one cause but five distinct dynamics, which often operate simultaneously.
Political Threat Disguised as Security Threat
The United States Federal Bureau of Investigation (FBI) operated the COINTELPRO programme from 1956 to 1971, initially targeting the Communist Party but rapidly expanding to encompass the civil rights movement, anti-war groups, feminist movements, the Native American movement, and student organisations. FBI Director J. Edgar Hoover ordered his agents to "expose, disrupt, misdirect, discredit, or otherwise neutralize" the activities of these movements.
The United States Senate investigative committee, known as the Church Committee, stated in its 1976 report that the FBI's motivation was not national security but "the maintenance of the existing social and political order." The programme's targets included Nobel laureate Martin Luther King Jr., about whose private life the FBI gathered material and to whom it sent a letter implying he should commit suicide. The Church Committee concluded that the methods used would have been intolerable in a democratic society even if all targets had been involved in violent activity; COINTELPRO went far beyond that.
The United Kingdom's police operated covert undercover units from 1968 to 2011 that infiltrated over a thousand political groups. According to the Undercover Policing Inquiry report published in 2023, police infiltration was legally justified in only three of over a thousand groups. The rest were organisations engaged in lawful political activity: environmental movements, trade unions, and anti-racism campaigns. Undercover officers entered into long-term sexual relationships with their targets, fathered children under false identities, and stole the identities of dead children. The Metropolitan Police admitted in 2015 that the practices were "a gross violation" and apologised.
In both cases, the official justification was security. The real motive was political: the targets challenged the existing order.
Organisational Self-Preservation and Bureaucratic Inertia
The East German Ministry for State Security, the Stasi, codified the Zersetzung method in 1976 (Directive 1/76), whose purpose was to "decompose, paralyse, disorganise, and isolate hostile and negative forces." The Stasi shifted from open persecution to psychological warfare in the 1970s because direct suppression was drawing international criticism. The new method was invisible: most victims never knew who was causing their problems.
The mechanism of Zersetzung is particularly significant in the context of this article. The Stasi built a detailed psychological profile of each target — a so-called "psychogram" — that mapped the person's vulnerabilities: family relationships, professional ambitions, health conditions, sexual orientation, financial pressures. Based on this profile, a personalised operation was designed targeting precisely those points where the target was weakest. Methods included spreading rumours, blocking career advancement, sabotaging personal relationships, damaging property, and deliberately incorrect medical treatment.
Historian Mike Dennis has estimated that between 1985 and 1988 the Stasi opened 4,500 to 5,000 new operational cases against individual persons annually. An international anti-torture organisation has estimated the total number of Zersetzung victims at 300,000 to 500,000.
Why did the Stasi do this? Partly for political reasons, but partly because it was an organisation that needed targets to justify its existence. At its peak, the Stasi had 91,000 employees and an estimated 170,000 to 500,000 unofficial collaborators; by some estimates, one in three East Germans was either under surveillance or an informant. A machine built to find threats finds them, because its existence depends on it.
The International Detour
The Five Eyes intelligence community (United States, United Kingdom, Canada, Australia, and New Zealand) forms the world's most extensive intelligence-sharing network. Documents leaked by Edward Snowden in 2013 revealed that Five Eyes countries systematically surveilled each other's citizens and shared the collected intelligence. The mechanism circumvents national legislation: Country A may not surveil its own citizens, but Country B can do it on Country A's behalf and share the results.
Canadian federal judge Richard Mosley condemned in 2013 the Canadian Security Intelligence Service's (CSIS) practice of outsourcing the surveillance of Canadian citizens to foreign partner agencies while keeping domestic courts in the dark. According to Privacy International, the secrecy surrounding Five Eyes arrangements enables arbitrary or unlawful interference with the right to privacy that circumvents the constraints of national legislation. A UN human rights office report stated that states' efforts to coordinate surveillance practices in order to circumvent national legal safeguards are unlawful.
Finland is not a Five Eyes member, but NATO membership connects Finland to a broader intelligence-sharing network. Nine Eyes cooperation (Five Eyes plus Denmark, France, the Netherlands, and Norway) and the 14 Eyes network (Sweden, Germany, Belgium, Italy, and Spain) extend into Finland's immediate neighbourhood. The question of whether a similar detour dynamic could also function within NATO's intelligence structures remains open.
Lowering the Threshold
The Stasi needed dozens of people and months of preparation for a Zersetzung operation. The FBI's COINTELPRO required an extensive field organisation. The UK's undercover police operations demanded years-long infiltrations.
AI changes this equation fundamentally. Profiling that required an analyst team's months of work now happens in hours. A psychological profile whose construction demanded personal contacts and field intelligence is now generated from data automatically. A personalised influence strategy that previously required the expertise of an experienced operator is now a model-generated recommendation.
Lowering the threshold means an operation can be launched on lighter grounds, with fewer personnel, and with less risk of exposure. In East Germany, the Stasi needed one secret police officer for every 166 citizens. In the age of AI, that ratio is meaningless, because a single operator can manage the simultaneous profiling and influencing of dozens of targets.
Classification Creep and Systemic Self-Reinforcement
COINTELPRO began as surveillance of the Communist Party and expanded within a decade to cover feminist movements, environmental activists, and civil rights actors. The UK's undercover police operations started with protests against the Vietnam War and expanded to trade unions, environmental groups, and even the justice campaign of murder victim Stephen Lawrence's family. The Stasi started with political dissidents and expanded to cover punks, environmental activists, and peace groups meeting in church circles.
In every case, the same dynamic repeats: once an operation is launched, it generates data; the data generates interpretations; the interpretations generate justifications for expanding the operation. A target initially classified as "to be monitored" is later reclassified as a "potential threat" and eventually as an "active threat." Each reclassification opens new operational authorities. The system feeds itself.
AI intensifies this dynamic. A model designed to identify threats optimises itself to find them. If the system's success is measured by the number of identified threats, it produces threats by the same logic that medical overdiagnosis increases as screening capacity grows. The number of findings increases regardless of whether the number of actual threats increases.
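The arithmetic of the screening analogy is worth making explicit. In the toy simulation below, the true threat rate is held constant while screening capacity grows; because no classifier has a zero false-positive rate, the number of "findings" grows roughly linearly with capacity alone. All rates are invented for illustration, and deliberately optimistic.

```python
import numpy as np

rng = np.random.default_rng(1)

population = 1_000_000
true_threat_rate = 1e-4      # constant: ~100 actual threats per million
false_positive_rate = 0.01   # a 99%-specific classifier, optimistic
true_positive_rate = 0.9

is_threat = rng.random(population) < true_threat_rate

for screened in (10_000, 100_000, 1_000_000):
    idx = rng.choice(population, size=screened, replace=False)
    subset = is_threat[idx]
    flagged = np.where(subset,
                       rng.random(screened) < true_positive_rate,
                       rng.random(screened) < false_positive_rate)
    print(f"screened {screened:>9,}: {flagged.sum():>6,} flagged, "
          f"{(subset & flagged).sum():>4} real")
```

Scaling the screening from ten thousand to a million multiplies the findings a hundredfold without a single additional threat existing.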
Who Decides?
The historical evidence shows that the decision to direct an operation against a citizen is not a single moment but a chain. In COINTELPRO, Hoover issued the general directive and field offices executed it independently, often without headquarters knowing the details. In the Stasi, operational plans were drafted at regional departments and approved at the supervisory level, but political direction came from the party leadership. In the UK's undercover police operations, Special Demonstration Squad chief Bob Lambert admitted that the unit operated as "a black operation that practically no one knew about and that only the police had authorised."
AI fragments this decision chain further. An operator formulates the mission brief. A model produces the profile. Another analyst interprets the results. A third level approves the strategy. A fourth executes it. Each stage is a separate decision, and no single decision-maker necessarily sees the whole picture. Responsibility dissolves into the structure.
The Church Committee identified this dynamic as early as 1975: the gravest derelictions of duty were not those of field agents but of senior leadership, whose task it was to oversee intelligence operations and who systematically failed to ensure compliance with the law. Half a century later, the structure is the same. The tools have evolved.
Sources: FBI COINTELPRO (FBI Vault); Church Committee, "Intelligence Activities and the Rights of Americans" (1976); Zersetzung (Wikipedia); Dennis, Mike, "The Stasi: Myth and Reality" (2003); Behnke, Klaus, "Zersetzungsmaßnahmen" (1998); Undercover Policing Inquiry (UK); Privacy International, Five Eyes; Snowden, Edward, "Permanent Record" (2019); Mosley, Justice Richard, Federal Court of Canada (2013).
VI. Media Environment Manipulation: Personalised Information Influence
Profiling produces knowledge. Strategy produces a plan. Media environment manipulation executes it in the target's everyday reality.
AI-assisted media manipulation does not mean publishing fabricated news. It means reshaping the target's information environment so that the target themselves draws the conclusions the operation's designer wants them to draw.
Algorithmic content targeting is the basic tool: search engine results, news feeds, and social media visibility are not neutral. They are produced by algorithms, and algorithms can be influenced. Search engine results can be weighted toward certain content. News feeds can be shaped so that certain types of content are emphasised. Social media visibility can be steered so that the target is repeatedly exposed to specific narratives.
Targeted advertising takes this further. Algorithm-driven advertisements can "speak" directly to the target's current life situation. An advertisement offering debt comparison to a person in financial distress. An advertisement offering security services to a person who fears surveillance. An advertisement offering relationship counselling to a person whose relationship is under strain. These are not coincidences. They are based on the collected profile, and they resonate with the target's current fears.
Direct communication channels expand the sphere of influence even further. Marketing messages, sales calls, text messages, and emails appear to be random commercial communications. Their content, however, is selected to resonate with the target's situation, and their timing can be coordinated with other pressure mechanisms. Examined individually, each message is normal marketing. Together, they form an information framework that reinforces the target's uncertainty.
The most effective form of influence, however, is not targeted advertising or communication. It is targeted information that triggers, for instance, a radicalisation process.
Feeding the right data at the right point to the right person can trigger a process that appears entirely organic. Susceptibility points are identified from the target's profile: distrust of institutions, perceived injustice, social isolation, or ideological sensitivity. Information is fed into these points — information that is in itself true or partially true. It is, however, selected and timed to reinforce the desired narrative in the target's own thinking. The process is effective precisely because the target feels they have drawn their own conclusions.
The Turtiainen case provides a concrete and recent frame of reference for this dynamic. According to MTV Uutiset and Yle, Turtiainen stated in his asylum video that "friends from the Russian side" had urged him to leave Finland immediately due to government actions. Public sources do not reveal who these "friends" were or whether they were acting as part of a broader influence operation. This article does not claim that this is what happened, but within the model of information influence, this is precisely how a target's decision-making would be steered: a "tip" relayed through a third party, based on the target's pre-existing fear, leading to a decision the target experiences as their own. The broader arc of Turtiainen's media framing and its dynamics are examined in more detail in Chapter X.
Narrative planting is the long-term form of media manipulation. An organic-looking climate of opinion is built around a single individual, gradually. Individual articles, comments, discussion threads, and social media posts form a whole in which "public opinion" about the target appears to have formed naturally.
One particularly subtle form of influence is constructing an environment in which the target's truthful observations appear to outsiders as signs of mental illness. "The Invisible Guardian" described this phenomenon as "induced disclosure": a situation in which the target is pushed to react to the pressure they experience in a way that looks irrational to an outside observer. The more the target describes their actual situation, the less credible they appear. This is a trap whose effectiveness depends on the outsider not seeing the full picture.
Sources: DiResta, Renée, Stanford Internet Observatory; Oxford Computational Propaganda Project; EU EEAS reports on information environment manipulation; Eisenstat, Yael, research on algorithmic radicalisation; MTV Uutiset; Yle.
VII. Institutional Pressure Enhanced by AI
A single government action is a normal part of the rule of law. The tax authority audits taxes. Social services assess a family's situation. Police investigate reports. The enforcement authority collects debts. Each of these is a lawful, justified, and independent action.
In an AI-assisted influence operation, the timing and frequency of these actions can be coordinated.
Different government agencies each have their own powers and their own reasons for contacting a citizen. Social services, the tax authority, police, the enforcement authority, health services, the employment authority, insurance companies: each is an independent actor with lawful grounds for its own actions. Examined individually, every contact is a normal government action.
An AI model can, however, analyse which government channels are the most effective pressure points in the target's situation. It can model how the simultaneity of different government actions affects the target's psychological state. It can recommend timing that maximises the overall impact. The result is a situation in which the target faces multiple simultaneous government processes, each of which is entirely lawful on its own. No one is officially coordinating the whole, and no single authority necessarily knows about the others' actions.
Documentation control is another dimension. AI can help construct a narrative that is consistent across multiple authorities. The same basic story about the target: financially unreliable, socially deviant, politically radical, or psychologically unstable. This narrative does not require coordination between authorities. It requires only the feeding of consistent information into different channels.
The Turtiainen case can illustrate how an institutional narrative is constructed. Apu magazine columnist Anne Moilanen wrote that no one has persecuted Turtiainen in Finland other than the enforcement officer, and listed his enforcement debts, criminal convictions, and lost positions of trust. This totality forms a framework in which all of the person's statements are automatically interpreted as the reactions of someone who has "reached a dead end." The framework is self-sealing: the claim of persecution is explained away as bitterness; the presentation of evidence is dismissed as paranoid interpretation. It rejects everything that contradicts it.
This mechanism is not new. The East German Stasi called it Zersetzung: the systematic destruction of a person's reputation and credibility using the resources of the state apparatus. The FBI's COINTELPRO programme used similar methods in the 1960s and 1970s. AI has fundamentally changed the scale of these mechanisms: coordination is faster, personalisation more precise, and the trail of evidence smaller.
VIII. Targeting the Inner Circle: Relationships as Objects of Influence
No person is an island. Every individual is part of a network: a possible intimate relationship, friends, family members, colleagues, and a wider circle. An AI-assisted influence operation targets this network because the network is both the target's resource and their vulnerability.
AI-based profiling is not directed solely at the primary target. It extends to their close ones as well. Each member of the inner circle has their own vulnerabilities, fears, and motivations mapped, because they are potential pressure points. If the target is in a relationship, their partner is profiled separately. Friends are assessed for loyalty and susceptibility to influence. Among family members, those through whom pressure is most effectively transmitted are identified.
The pressures directed at members of the inner circle can be independent of each other yet simultaneous. One close contact faces pressure at their workplace. Another receives contact from an authority. A third sees social media content that raises suspicions about the target. None of them knows the full picture. Each reacts from their own perspective. The result is the weakening of the support network from multiple directions simultaneously.
Using close ones as leverage is one of the oldest forms of influence. Government contacts with the target's inner circle — whether a social services enquiry, a health services query, or another government action — affect the target indirectly. The target knows that their close ones are being contacted. It creates pressure that is not directed at the target themselves but at their sense of security in their relationships.
The erosion of trust relationships is the long-term objective of the process. In the Stasi's Zersetzung documents, this was called "decomposition of the social network." The goal is not to prevent contact but to erode trust. Elements of uncertainty introduced through third parties chip away at trust gradually: rumours, insinuations, a "concerned" message from a friend who "heard something." AI enables this in a personalised and real-time manner, individually tailored for each member of the inner circle.
The manipulation of digital communications is a theoretical but technically feasible form of influence. A so-called man-in-the-middle attack refers to a situation in which a third party inserts itself, unnoticed, into the communication channel between two people. Subtle alterations, such as delaying messages, modifying context, or changing tone, can create asymmetry and distrust between the parties. Each party sees the other's messages slightly differently from how they were sent, and neither knows about the alteration.
This is technically surprisingly simple to execute if the target does not use end-to-end encrypted communication channels. Most people do not think about the security of their digital communications in daily life. Text messages, emails, and many instant messaging applications are vulnerable if the third party has sufficient technical resources and authority.
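The flip side is equally simple to state: end-to-end authentication makes silent alteration detectable. The sketch below uses an HMAC from Python's standard library as a stand-in for the authentication layer of a real end-to-end protocol; the shared key handling is deliberately simplified for illustration.

```python
import hashlib
import hmac

# Shared secret between the two endpoints. In a real end-to-end protocol
# this role is played by keys negotiated by the protocol itself.
KEY = b"example-shared-secret"

def seal(message: bytes) -> tuple[bytes, bytes]:
    """Sender attaches an authentication tag computed over the message."""
    return message, hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Receiver recomputes the tag; any alteration in transit invalidates it."""
    return hmac.compare_digest(tag, hmac.new(KEY, message, hashlib.sha256).digest())

msg, tag = seal(b"see you at 18:00")
tampered = b"see you at 19:00"   # a 'subtle alteration' made in transit
print(verify(msg, tag))          # True
print(verify(tampered, tag))     # False: the change is detectable
```

This is why the preceding paragraph matters: channels without such end-to-end authentication leave alteration not only possible but invisible.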
IX. Digital Elimination: From Visibility to Invisibility
The methods described above target the individual directly or through their inner circle. Digital elimination targets the individual's public existence.
The goal is not censorship. It is invisibility.
Search engine visibility manipulation is the basic tool. Negative content optimisation means that positive or neutral search results related to the target are pushed down while negative results are pushed up. This does not require the search engine's cooperation. It requires only a sufficient volume of optimised content to fill the search results.
Service provider pressure is another dimension. The target's website hosting service can be pressured on the basis of terms-of-service violations. Complaints can be filed with the domain registrar. Reports can be submitted to the social media platforms the target uses, leading to account restrictions or shadowbanning — where the target's content appears normal to them but is invisible or difficult to find for other users.
Blocking access to content tools extends the influence to the target's capacity to publish. Banning or restricting the target's use of AI services and other technical tools on terms-of-service grounds can limit their ability to produce and publish content.
AI-assisted publishing introduces an entirely new dimension. AI-generated articles, blogs, and discussion threads can build a negative narrative about the target in a way that looks organic and like independent opinion formation. The content is not propaganda produced by a single entity. It is material produced by dozens of seemingly independent sources that all arrive at the same conclusion.
Bot networks and discussion forums amplify this process. Automatically generated discussion traffic reinforces the desired narrative about the target. AI produces believable personas that participate in discussions from different angles and in different styles. One "commenter" is a concerned citizen. Another is a former colleague. A third is an anonymous official. A fourth is a researcher citing their own experience. Each one is fabricated. Each one arrives at the same conclusion.
The result is a situation where the target's website exists, their social media accounts exist, and their content exists. Nobody simply finds it. Those who do find it encounter a pre-built negative narrative that explains the target away before they have said a single word.
Sources: Bradshaw, Samantha and Howard, Philip, "The Global Disinformation Order" (Oxford Internet Institute, 2019); DiResta, Renée, "The Digital Maginot Line" (Stanford Internet Observatory); EU EEAS reports; Nimmo, Ben, research on bot detection.
X. Escalation Dynamics: The Power of Narrative
The methods described above are not static. They adapt to the target's reactions. This adaptability is what makes an AI-assisted influence operation particularly effective: it learns about the target in real time and adjusts its strategy accordingly.
At a general level, escalation dynamics follow a recognisable pattern. Pressure begins subtly and increases gradually if the target does not yield. Each new level opens new methods. Each of the target's reactions provides new data that fine-tunes the next phase.
Shifting the Narrative
In public discourse, the framing of a person is the decisive factor. It determines how the audience interprets everything the person says and does. The frame functions as an interpretive filter: the same statement looks different depending on which frame it is viewed through.
An AI model can analyse which narrative resonates most effectively with the target's audience at any given moment. It can recommend shifting the narrative if the previous frame loses its effectiveness. The transition from one frame to another happens gradually, so the audience does not consciously notice the change.
The Turtiainen case provides an example of this dynamic. Finnish media framing changed over the years along a recognisable arc:
In the first phase, in 2019, Turtiainen was framed as a "colourful newcomer." His powerlifting background, directness, and Finns Party base made him an interesting figure. Reporting was neutral and curious.
In the second phase, in 2020, the frame shifted to "controversy-prone MP." The travel expense scandal, mocking of police, and ultimately a tweet mocking George Floyd moved reporting into the scandal-news register.
In the third phase, in 2021, the frame deepened to "conspiracy theorist." Comparing COVID vaccines to genocide, calls for stockpiling weapons, and the founding of the VKK party pushed reporting into a register where Turtiainen's statements were interpreted as irrational by default.
In the fourth phase, between 2022 and 2024, the frame shifted to "security threat." After Russia's war of aggression, Turtiainen's pro-Russia stance transformed him from a subject of scandal reporting into a security-political concern. Suspicions of leaking classified information from the Defence Committee deepened this framing.
In the fifth phase, from 2025 onwards, the frame settled as "traitor and laughing stock." After his move to Russia, media framing combined contempt and ridicule. Turtiainen was simultaneously presented as a national security concern and a pitiful figure who "wanders around" a hotel and does not speak Russian. This dual strategy delegitimised him both as a political actor and as a feared threat, making a return to public credibility practically impossible from either direction.
Each phase built on the previous one. The narrative shift was not an abrupt leap but a gradual slide in which each new frame felt like a natural continuation of the last. This is precisely the dynamic that an AI model can produce systematically and optimise in real time.
Exploiting Legal Status
As the narrative changes, the legal framework can change as well. Reclassification of the target can unlock new methods. Altering the security classification expands available authorities. Activating international intelligence cooperation brings in new resources and actors. Applying legal categories to the target can change their position fundamentally.
AI can construct the legal argumentation that justifies the reclassification. It can analyse the target's statements and actions retrospectively and find interpretations that support the new classification. The model does not fabricate evidence. It finds, within existing data, the interpretations that serve the required framework at any given time.
Case Example: The Arc of Neutralisation
Turtiainen's sequence of events forms a recognisable escalation arc: elected to parliament in 2019, with access to classified material as a deputy member of the Defence Committee; expelled from his parliamentary group in 2020 and from the party in 2021; stripped of his Defence Committee seat in 2022; out of parliament after the 2023 elections. Two years later he moved to Russia, where he received refugee status in 2025.
Public sources do not reveal to what extent this arc was organic radicalisation and to what extent it was influenced from outside. The recognisability of the mechanism is, however, a significant observation in itself. The arc follows the same structure regardless of whether influence took place or not: first, isolation from one's own reference group; then, erosion of credibility in the public sphere; followed by removal of institutional standing; and finally, physical relocation away.
The use of AI-assisted profiling and narrative construction in this process would have accelerated each phase and made the framing more personalised, more precise, and harder to detect.
XI. The Legal Vacuum: AI-Assisted Influence and Finnish Law
Finland's intelligence laws — the Act on Military Intelligence (590/2019) and the Act on Civilian Intelligence (582/2019) — contain over 20 explicitly named intelligence methods. These include telecommunications interception, telecommunications monitoring, undercover operations, controlled purchases, technical surveillance, technical observation, copying, covert intelligence gathering, directed use of intelligence sources, and communications intelligence.
Not a single one of these methods was designed to cover algorithmic profiling, personalised media manipulation, or AI-assisted psychological influence.
This does not mean that intelligence authorities are acting contrary to the law. It means that the law does not recognise the methods that AI makes possible. There is a vacuum in the law that cannot be filled by interpretation, because what is at stake is an entirely new type of activity.
From a fundamental rights perspective, the situation is significant. Section 10 of the Constitution protects the right to privacy. Section 12 protects freedom of expression. Section 7 protects personal liberty. Algorithmic profiling, media environment manipulation, and psychological influence target all of these rights in a way that the fundamental rights limitation test has never assessed in this context.
The EU AI regulation (AI Act, 2024) classifies high-risk AI systems and sets requirements for them. However, its scope of application explicitly excludes military use. This means that precisely those use cases where AI's risks to fundamental rights are greatest fall outside EU regulation. National legislation does not fill this gap.
The case law of the European Court of Human Rights provides a partial framework. Big Brother Watch v. United Kingdom (2021) and Centrum för Rättvisa v. Sweden (2021) addressed mass communications surveillance and its relationship to the right to privacy. The Court required that mass communications surveillance must have sufficient safeguards against arbitrariness. AI-assisted profiling and influence go further than mass communications surveillance. Case law does not yet recognise this dimension.
The Ministry of the Interior is preparing amendments to the intelligence laws (SM040:00/2024). It remains to be seen whether the proposal will assess AI-assisted methods at all or whether it will focus on expanding traditional authorities. The Constitutional Law Committee assessed intelligence laws in 2018, in a world where large language models did not exist. In 2026, they are in daily use.
The question of accountability is the deepest dimension of the legal vacuum. Who is responsible for an AI-generated profile that leads to an influence operation? The operator who gave the mission brief? The analyst who interpreted the results? The commander who approved the strategy? The programmer who built the model? Or is no one responsible, because the system is so distributed that no single part sees the whole?
The intelligence law does not answer this question. The Constitution does not answer this question. EU legislation does not answer this question.
Sources: Act on Military Intelligence (590/2019); Act on Civilian Intelligence (582/2019); Constitution of Finland (731/1999); EU AI Act (2024); ECtHR, Big Brother Watch v. UK (2021); ECtHR, Centrum för Rättvisa v. Sweden (2021); SM040:00/2024.
XII. International Comparisons and Cautionary Examples
AI-assisted influence is not a theoretical possibility. It is a documented reality in multiple countries.
NSO Group's Pegasus spyware is perhaps the best-known example. Citizen Lab's investigations have documented how Pegasus was used against journalists, human rights activists, and politicians in over 40 countries. Pegasus enables the complete takeover of a phone: messages, calls, camera, microphone, and location data. Its customers included states that directed it against their own citizens.
Saudi Arabia's operation against journalist Jamal Khashoggi in 2018 combined digital surveillance with physical elimination. Khashoggi's social network was mapped digitally before his murder in Istanbul, Turkey. The operation demonstrated how AI-assisted profiling can lead to extreme consequences.
China's Xinjiang Uyghur surveillance system is the world's most comprehensive example of AI-based control of an ethnic group. Facial recognition, phone surveillance, movement tracking, and social contact mapping combine into a system that can predict "deviant" behaviour and lead to detention before anything has happened. The UN Human Rights Office report (2022) documented the situation extensively.
Myanmar's military junta's social media manipulation prior to the 2017 genocide is a cautionary example of how information influence can prepare the ground for violence. Facebook later admitted it had not sufficiently addressed hate speech and disinformation on its platform.
Russia's Internet Research Agency (IRA) demonstrated between 2014 and 2016 how AI-assisted information influence can target another country's elections. The IRA's operations were then largely manual. The development of AI has since multiplied the effectiveness of comparable operations many times over and reduced their costs to a fraction.
In each of these cases, technological capability preceded legislation by years or decades. Pegasus was in use for years before the international debate about it began. China's surveillance system was built without legislative debate. The social media manipulation preceding Myanmar's genocide was not identified in time.
Finland is no exception to this dynamic.
Sources: Citizen Lab, "The Pegasus Project" (2021); UN OHCHR, "Assessment of Human Rights Concerns in the Xinjiang Uyghur Autonomous Region" (2022); Facebook (Meta), Myanmar Human Rights Impact Assessment (2018); Mueller, Robert, "Report on the Investigation into Russian Interference in the 2016 Presidential Election" (2019).
XIII. The Open Question
This article has described structures: how AI transforms intelligence profiling, influence, and the neutralisation of a target. It has done so based on public sources.
Describing structures is not a claim that they are in use — but it is a demonstration that they are available.
We have entered an era in which the state can use AI for the comprehensive profiling of an individual citizen, for psychological influence, and for societal neutralisation. No law names this activity. No authority oversees it. No court evaluates it.
The structure that was unmonitored in "The Invisible Guardian" is now also invisible.
Monitoring an invisible structure is impossible.
The question of accountability remains open, as Chapter XI demonstrated: a distributed system in which no single part sees the whole yields no single accountable party.
This is not a technology problem. This is a democracy problem.
A question for parliament: Is Finland prepared to address how AI is used in intelligence operations? Who oversees what no one can see?
All information presented in this article is based on public sources: legal texts, government proposals, academic publications, case law, and journalistic sources. This article contains no classified information.
P.S. How many presuppositions can you find in this text, and in which direction did they steer you?