If You Have Nothing To Hide, You Have Nothing To Fear
The Illusion of Harmless Data
Many celebrate the Apple iPhone 16 as a triumph of consumer tech – a seamless blend of convenience and security. In Apple’s marketing, privacy is a selling point, feeding the comforting narrative that our devices are safe havens for our personal information. Soothed by the familiar refrain “if you have nothing to hide, you have nothing to fear,” users shrug off the sprawling data collection built into everyday apps and gadgets. This “nothing to hide” argument is a fallacy – one long exploited by both Big Tech and Big Brother (theguardian.com journalofdemocracy.org). Apple’s own recent actions reveal the cracks in the illusion. As I described in my previous article, the company quietly announced a system to scan every iPhone user’s photos for illicit images, a move meant to protect children but one that essentially opened a backdoor to mass surveillance on every device (wired.com). The plan drew fierce backlash from security experts who warned that such a “privacy-preserving” scanner could be repurposed to spy on anything – dissent, political speech, intimate life – under the guise of safety (wired.com). Apple ultimately killed the photo-scanning scheme after public outcry (wired.com), but the episode exposed a stark reality: even the world’s most valuable corporation, famed for its pro-privacy rhetoric, is on the cusp of normalizing constant digital surveillance.
The truth is that data which seems mundane can be weaponized in extraordinary ways. Tech corporations assure us that tracking our clicks, messages, and movements is benign – just fuel for personalized ads or nifty features. Yet as scholar Shoshana Zuboff observes, “if you have nothing to hide then you are nothing.” In other words, a life laid completely bare to surveillance is one stripped of autonomy (theguardian.com). The architects of surveillance capitalism – from Google to Meta to Apple – have created an economy that mines every flicker of our online behavior. This trove of behavioral data doesn’t just stay in a silo. It gets fed into prediction algorithms and sold to the highest bidders, commercial and political (theguardian.com politico.com).
The end goal isn’t simply to know us – it’s to move us. “There’s no one coming to take anyone away to the gulag,” Zuboff says of today’s pervasive monitoring. “It doesn’t want to kill us. It just wants to move us in the direction of its predictions and get the data” (theguardian.com). In the hands of consumer marketers, that means nudging us to buy things. In the hands of political operatives and military psy-ops teams, it means nudging society itself – steering opinions, votes, and social movements with invisible hands. The “harmless” data harvested from our phones and online lives has become anything but harmless. It is the raw material for a burgeoning industry of geopolitical manipulation and social control.
This exposé builds on my previous revelations about Apple and the iPhone 16’s role in surveillance capitalism, but widens the lens to the entire global ecosystem of data exploitation. I will dismantle the myth that widespread data collection is innocuous, showing how it underpins a new breed of influence operations that target our psychology and our democracy. From Silicon Valley’s platforms (Facebook, Google, Amazon, Twitter/X, Microsoft and more) to China’s TikTok, corporate data-harvesting has become a geopolitical weapon – one increasingly wielded to erode civil liberties, polarize societies, and even destabilize nations. Nowhere was this clearer than in the Cambridge Analytica scandal, which proved in stunning fashion that a handful of data scientists armed with illicit Facebook data could help swing the 2016 elections and “put democracy on the ropes” (theguardian.com). That was only the beginning.
I will trace how those tactics evolved from a rogue campaign strategy into a standard playbook for political engineering at scale. By 2025, the landscape is littered with the consequences: democratic decay, algorithmic extremism, and unrest fueled by constant information warfare. Finally, I turn to a domain once confined to battlefields: cognitive warfare. NATO and Western militaries have begun formulating doctrines to fight “wars of the mind,” not only against foreign adversaries but even targeting their own populations in “influence operations” meant to shore up control and consent for potential wars with adversaries such as China and Iran. Leaked documents and official white papers reveal an unsettling truth – the same data and platforms that enable personalized ads now enable precision-guided propaganda in both foreign and domestic arenas. In this new war for hearts and minds, your personal data is the ammunition, and the distinction between commerce, politics, and combat gets blurrier by the day.
Cambridge Analytica – Proof of Concept
When Cambridge Analytica exploded into public view in early 2018, it exposed the dark potential of mass data in politics. This British data analytics firm had quietly siphoned off personal information from up to 87 million Facebook users without consent, using a quiz app to scrape not only those who installed it but their unwitting friends (theguardian.com). Armed with a gargantuan dataset of detailed personal profiles, Cambridge Analytica set out to micro-target voters in the 2016 US presidential campaign and the UK’s Brexit referendum. Its employees touted themselves as masters of a new political science – “psychological warfare” meets Madison Avenue, a fusion of behavioral science and Big Data modeling to “change behavior” at the ballot box (politico.com). As the company’s CEO Alexander Nix brazenly boasted, “We did all the research, all the data, all the analytics, all the targeting” for Donald Trump’s digital campaign (politico.com). Trump’s surprise victory and the Brexit “Leave” win both bore Cambridge Analytica’s fingerprints, and suddenly the world wanted to know: What exactly had they done with our data?
Whistleblower Christopher Wylie, a pink-haired former Cambridge Analytica staffer, pulled back the curtain. He revealed that the firm had developed “psychographic” profiles on tens of millions of Americans, categorizing personalities and political leanings to pinpoint vulnerabilities (theguardian.com). The Trump campaign, through Cambridge Analytica, could target an individual voter with tailored messages designed to exploit their fears, biases, and desires – a personalized propaganda feed invisible to anyone else. In Wylie’s words, it was like “a weapon” aimed at democracy’s heart (theguardian.com). The goal was not persuasion through reasoned debate, but manipulation through data-driven insight, bypassing rational resistance. As Carole Cadwalladr – the investigative journalist who broke the story – later summarized, Cambridge Analytica was proof that “every aspect” of our online life could be “mimicked” and repurposed for political mind games (theguardian.com). What Silicon Valley had pioneered for targeted advertising, this firm simply redirected toward elections. It was, in effect, a private deployment of techniques long used in military psy-ops, now unleashed on civilian populations by campaign consultants.
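Reporting on these “psychographic” methods describes a surprisingly simple pipeline: estimate personality traits from behavioral signals such as page likes, then route each person to the message framing their profile suggests they are most susceptible to. The Python sketch below is purely illustrative; every page name, weight, and threshold is invented, and the real models were reportedly regressions trained on millions of harvested profiles.

```python
# Purely illustrative sketch of trait-based message routing.
# All page names, weights, and thresholds are invented; real psychographic
# models were reportedly trained on millions of harvested Facebook profiles.
from dataclasses import dataclass, field

# Hypothetical per-"like" weights for one trait (here, anxiety-proneness).
TRAIT_WEIGHTS = {
    "page:doomsday_prepping": 0.40,
    "page:crime_watch_news": 0.25,
    "page:community_gardening": -0.15,
}

@dataclass
class Voter:
    user_id: str
    likes: list[str] = field(default_factory=list)  # harvested behavioral signals

def trait_score(voter: Voter) -> float:
    """Crude linear estimate of a single personality trait from page likes."""
    return sum(TRAIT_WEIGHTS.get(like, 0.0) for like in voter.likes)

def pick_ad_variant(voter: Voter) -> str:
    """Route the voter to the framing their estimated profile suggests
    they are most susceptible to: fear-based or aspiration-based."""
    return "fear_framed_ad" if trait_score(voter) > 0.3 else "hope_framed_ad"

if __name__ == "__main__":
    v = Voter("u123", ["page:doomsday_prepping", "page:crime_watch_news"])
    print(pick_ad_variant(v))  # -> fear_framed_ad (score 0.65)
```

The unsettling point is how little machinery is needed: once the behavioral data exists, the targeting logic is trivial. The scarce ingredient was always the data itself.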
Indeed, Cambridge Analytica’s pedigree underscored this military connection. The company was spun out of a contractor called SCL Group (Strategic Communication Laboratories), which for years had sold “influence operations” to militaries and governments around the world (politico.com). SCL had billed itself at arms industry trade shows as “the first private company to provide psychological warfare services” to the military (politico.com). In the mid-2000s, SCL promised the British Ministry of Defence that mind-bending propaganda could shorten wars (politico.com). It ran information campaigns against the Taliban and consulted for NATO operations.
By the 2010s, this same company (through its Cambridge Analytica offshoot) turned to Western elections, treating American and British voters as just another “target audience” to be manipulated. “We use the same techniques as Aristotle and Hitler,” SCL’s founder Nigel Oakes once unabashedly said. “We appeal to people on an emotional level to get them to agree on a functional level” (politico.com). The reference to Hitler’s propaganda tactics was chilling – and telling. The architects of this strategy understood that emotional manipulation could override facts and reason.
In 2016, that theory was tested at scale. Cambridge Analytica flooded Facebook with tens of thousands of individualized ads, exploiting anxieties about immigration in the UK and crime in the US, pushing polarizing slogans (“Take Back Control”; “Build the Wall”) into the feeds of those most susceptible. Voters were nudged, often subconsciously, in directions they might not have gone on their own.
The full impact of Cambridge Analytica’s meddling may never be quantifiable, but its proof of concept was undeniable. A formerly obscure firm, bankrolled by ideological billionaires and fueled by illicit data, had helped topple established political expectations in the world’s leading democracies (theguardian.com). In the aftermath, Facebook’s CEO Mark Zuckerberg was forced to apologize to Congress (theguardian.com). Cambridge Analytica shut down in disgrace. Yet, as one researcher noted, the scandal was just “the tip of the iceberg” (politico.com). The real story was the emergence of a global influence industry, an “empire of psy-ops” stretching far beyond one election, one company, or one country (politico.com).
Cambridge Analytica, with its cache of Facebook profiles, had merely pulled back the veil on how any actor with enough data and savvy could warp the public’s perception – and how willing major tech platforms were to turn a blind eye until caught. As Zuboff sharply observed, Cambridge Analytica’s operations were nothing more than “a day in the life of a surveillance capitalist” (theguardian.com). The only difference was that this time, the behavioral modification was for political ends (votes) instead of commercial ones (purchases). It was a wake-up call that democracy itself could be quietly undermined by the very tools we use to update our status and share selfies.
Global Data Economy – Political Targeting at Scale
If Cambridge Analytica was the canary in the coal mine, the years since have shown that the entire mine is toxic. The global data economy that powers social media and online advertising has become a playground for political operatives, authoritarian regimes, and private mercenaries for hire. In 2016, we learned that a single Facebook app could harvest tens of millions of personal profiles with ease (theguardian.com). By 2025, such data is even more abundant – and available to anyone with a credit card or state sponsorship. Tech giants like Facebook (Meta) and Google built their empires by offering “free” services in exchange for surveillance of users’ behavior. Every like, click, search, and location ping is tracked and fed into algorithms that can predict and influence what we do next (theguardian.com journalofdemocracy.org). This surveillance capitalism machinery, originally designed to maximize ad revenue, is now routinely repurposed for political profiling and targeting on a massive scale.
The microtargeting techniques Cambridge Analytica pioneered have become mainstream in election campaigns worldwide. Political parties and consulting firms across Europe, Asia, Africa, and the Americas scooped up voter data – from social media, data brokers, and breaches – to build their own psychographic models. In the 2020 US presidential race, both major campaigns employed armies of data analysts to slice and dice the electorate into niche categories susceptible to tailored messages. In India’s 2019 elections, operatives weaponized WhatsApp and Facebook to spread carefully targeted misinformation to specific castes and communities.
In Brazil, social networks were inundated with propaganda ahead of votes, often sourced from shadowy marketing firms. What was once a boutique experiment has become a standard part of the political toolbox: harvest data, identify fault-lines, and exploit them with personalized content. Lawmakers have struggled to keep up. “This market is essentially the Wild West,” U.S. Senator Mark Warner warned after the Cambridge Analytica affair, noting how “extensive micro-targeting based on ill-gotten user data” thrives in an unregulated environment prone to deception (theguardian.com). Years later, that Wild West is only more entrenched.
Crucially, it’s not only democratic campaigns that leverage these tools – authoritarian governments and militaries have eagerly embraced them as well. In the Middle East, regimes have hired Western PR firms and data consultants to manage domestic opinion and track dissidents online. In 2018, it emerged that a company linked to Cambridge Analytica had pitched its services in countries from Kenya to Malaysia, offering to swing elections by digital means. And in an ironic twist, even as Western nations reeled from interference, some of their own militaries were quietly running parallel influence operations abroad. The Pentagon, for instance, has contracted multiple firms (including, reportedly, SCL Group) to conduct “online persona” operations aimed at influencing populations in the Middle East and beyond (politico.com).
Leaked documents and undercover investigations have revealed a thriving private market for disinformation-as-a-service. In early 2023, an international journalist consortium exposed “Team Jorge,” a group of Israeli contractors led by a former special forces operative, which claims to have meddled in 30 elections globally over the past two decades (theguardian.com). Team Jorge offered clients a menu of dark arts – hacking of political opponents, and a vast army of some 30,000 fake social media profiles complete with AI-generated avatars – all for the purpose of covertly manipulating public opinion in any country, for the right price (theguardian.com theguardian.com).
Their signature software, ominously called “Aims,” allows one operative to control thousands of fake accounts across Facebook, Twitter, Instagram and even Amazon, to create the illusion of consensus or dissent (theguardian.com theguardian.com). They boasted of clients ranging from African incumbents seeking to discredit rivals to corporate players wanting to smear regulators (theguardian.com). In other words, influence itself has been commoditized. As one Team Jorge agent put it on hidden camera: “Months of work, 33 presidential-level campaigns… Leave no trace.” These revelations underline how the behavioral data economy built by social media now enables a cottage industry of global propaganda mercenaries.
Meanwhile, the major U.S.-based platforms continue to be central conduits for political influence operations – sometimes unwittingly, sometimes with tacit acknowledgment. Facebook, after 2016, placed some restrictions on how political advertisers can target users (limiting certain sensitive categories). Yet its core business model remains profiling users for microtargeting, which any campaign or propagandist can exploit with enough creativity. Twitter (rebranded as X), under new ownership, has loosened many of its rules, potentially opening the floodgates for coordinated disinformation. Google still allows targeted political ads on its properties (with some constraints in the EU), meaning data from your Search history or YouTube viewing can determine what campaign message you see. Even TikTok, the wildly popular short-video app often labeled a national security threat due to its Chinese ownership, became a venue for influence – with partisan messaging and state-sponsored content slyly disseminated through viral videos.
Governments have grown wise to TikTok’s power: in 2023, the White House fretted that Beijing could use the app’s algorithm to subtly promote divisive content to millions of Americans. The flipside is Western governments themselves exploring how to harness TikTok trends for public messaging. Across the board, the line between authentic public sentiment and manufactured influence has blurred. We now live in what one analyst calls “the era of manipulation,” where digital platforms that once promised openness have instead become vectors for covert persuasion and social engineering (journalofdemocracy.org journalofdemocracy.org). The algorithms underpinning our information feeds prioritize whatever keeps us engaged – typically, content that is extreme, emotionally charged, and polarizing (journalofdemocracy.org).
This creates fertile ground for malicious actors to inject lies or amplify fears. As the Omidyar Network reported, social-media algorithms “engineer viral sharing in the interest of [platforms’] business models,” meaning outrage and misinformation often get algorithmically boosted (journalofdemocracy.org). In such an environment, distortion becomes a feature, not a bug (journalofdemocracy.org). Foreign agents – from Russian troll farms to ISIS recruiters – learned to ride these algorithmic currents, insinuating themselves into our online discussions with troubling ease (journalofdemocracy.org).
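The mechanism is simple enough to sketch in a few lines. In the toy ranker below (all posts and scores are invented), the objective never mentions outrage; it only maximizes predicted engagement. Because the inflammatory item is the one people click, it rises to the top anyway.

```python
# Toy feed ranker with invented posts and scores. The objective optimizes
# only predicted engagement, yet the most inflammatory post wins the feed,
# because outrage and engagement are correlated in the (hypothetical) data.
posts = [
    {"id": "calm_explainer",     "predicted_clicks": 0.02, "outrage_level": 0.1},
    {"id": "inflammatory_rumor", "predicted_clicks": 0.09, "outrage_level": 0.8},
    {"id": "partisan_op_ed",     "predicted_clicks": 0.05, "outrage_level": 0.4},
]

# Note: "outrage_level" is never consulted; distortion emerges as a side effect.
feed = sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)
print([p["id"] for p in feed])
# -> ['inflammatory_rumor', 'partisan_op_ed', 'calm_explainer']
```

This is what “distortion as a feature, not a bug” means in practice: no engineer has to intend the outcome for the system to produce it reliably.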
Perhaps the starkest lesson of recent years is that political actors, militaries, and corporations are not operating in separate silos – they are colluding within a unified behavioral data economy. Cambridge Analytica’s parent SCL exemplified this nexus: it had U.S. State Department contracts to counter terrorist propaganda overseas even as its offshoot was harvesting Facebook data to influence U.S. voters (politico.com politico.com). Its staff moved between military psy-ops and election campaigning with ease (politico.com politico.com). Likewise, social media companies now routinely partner with government agencies. In one direction, they hand over data: Edward Snowden’s disclosures showed that tech giants like Google, Facebook, Apple and Microsoft were compelled to give the U.S. National Security Agency direct access to user data via the PRISM program (theguardian.com).
In the other direction, they receive requests (and sometimes pressure) from authorities to tweak algorithms or police content in the name of public interest. During the COVID-19 pandemic, for instance, Western governments worked with Facebook and Twitter to elevate certain information and suppress “harmful” narratives, a collaboration aimed at public health but walking a fine line between education and information control. Such entanglements raise uncomfortable questions: at what point do private platforms become unofficial arms of the state’s influence apparatus? And conversely, when intelligence and military agencies use the same tools of influence that marketers do, do they cease to be accountable to democratic oversight?
2025 Landscape – Polarization, Unrest, and Algorithmic Governance
Today, in mid-2025, societies around the world are living with the aftershocks of a decade of data-driven manipulation. The symptoms are visible in every scroll through a newsfeed: fever-pitch polarization, proliferating conspiracy theories, and a public sphere splintered into algorithmically defined echo chambers. The promises of the early internet – more connection, more knowledge, more democracy – have curdled into what one scholar describes as “a dramatically different set of circumstances, living in an era shaped not by openness, but rather by manipulation” (journalofdemocracy.org). The manipulation is both bottom-up and top-down. Bottom-up, because ordinary users have learned to hack the attention economy, spreading sensationalism and misinformation that algorithms reward. Top-down, because state-aligned and corporate actors seed and amplify content to achieve strategic aims.
The result is a volatile feedback loop: algorithms favor extreme content that entrenches divisions; those divisions are then exploited by propagandists to push societies toward breaking points. Political discourse has become a cacophony of manufactured outrage, drowning out reasoned debate. As Facebook whistleblower Frances Haugen testified, “The version of Facebook that exists today is tearing our societies apart and causing ethnic violence around the world” (theguardian.com). Indeed, from Myanmar to Ethiopia, social media-fueled hate campaigns have translated into real-world atrocities, showing the deadly stakes of this new media order.
In Western democracies, the polarization is less bloody but deeply corrosive. Citizens increasingly inhabit parallel realities tailored by their news feeds – with liberals and conservatives, the secular and the radicalized, each receiving wholly different versions of truth. Engagement-hungry algorithms, like those at YouTube and Facebook, have been documented steering users toward ever more extreme content to keep their attention. A young man watching innocuous political commentary can be led, step by step, into a rabbit hole of far-right propaganda or jihadist extremism as the recommendation engine seeks the next clickbait video. By the time human moderators or regulators react (if they do at all), the algorithm has already done its work, possibly creating a new convert to an extremist cause.
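That rabbit-hole dynamic can be made visible with a deliberately crude toy model. In the sketch below, the recommender greedily picks whatever the user is predicted to watch longest, under the invented premise that appetite peaks for content slightly more extreme than what the user has already absorbed. All titles and numbers are hypothetical.

```python
# Toy "watch next" loop. Assumption (invented for illustration): predicted
# watch time peaks for content slightly more extreme than the user's baseline.
catalog = [
    {"title": "mainstream commentary", "extremity": 0.10},
    {"title": "edgy polemic",          "extremity": 0.40},
    {"title": "conspiracy deep-dive",  "extremity": 0.70},
    {"title": "extremist recruitment", "extremity": 0.95},
]

def predicted_watch_time(baseline: float, extremity: float) -> float:
    """Toy engagement model: best match is baseline + 0.2, falling off linearly."""
    return 1.0 - abs(extremity - (baseline + 0.2))

baseline = 0.10  # the user starts on innocuous content
for step in range(3):
    pick = max(catalog, key=lambda v: predicted_watch_time(baseline, v["extremity"]))
    print(f"step {step}: {pick['title']}")
    baseline = max(baseline, pick["extremity"])  # exposure shifts the baseline
# step 0: edgy polemic
# step 1: conspiracy deep-dive
# step 2: extremist recruitment
```

No step in the loop “decides” to radicalize anyone; each recommendation is locally optimal for engagement, and the drift toward the extreme is the emergent result.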
Meanwhile, mainstream politics finds itself hijacked by the passions of online tribes. Election seasons are now beset by viral lies that spread faster than campaigns can refute them. Minor fringe narratives – a baseless rumor of voter fraud here, a mischaracterized policy there – can be artificially amplified to millions via bot networks and enthusiastic partisans, forcing them into the center of public conversation. This dynamic was starkly visible in the United States: the false narrative that the 2020 election was “stolen” took root on social media and culminated in the January 6, 2021 Capitol riot, an unprecedented attack on the democratic process egged on by online misinformation.
Other countries have seen similar turbulence: consider France’s Yellow Vest protests, ignited and coordinated via Facebook; or Germany’s recent thwarted coup plot by a far-right group marinated in QAnon conspiracy forums. Data-driven polarization doesn’t just create angry words online – it is increasingly spilling onto the streets in the form of protests, clashes, and crises of governance.
Underlying all this is a subtle shift toward what can be termed algorithmic governance. De facto, tech companies now exercise enormous power over civic life – power that in earlier eras belonged to editors, regulators, or elected officials. Facebook’s decision to tweak its News Feed algorithm in 2018, for example, dramatically altered which news stories hundreds of millions of people saw. Internal Facebook papers (the “Facebook Files”) later revealed that the company knew this change amplified divisiveness – users were seeing more incendiary content that kept them hooked, even as it “rewarded outrage,” making political dialogue more vitriolic (fortune.com journalofdemocracy.org). Yet Facebook largely ignored the warnings because the engagement numbers looked great (theguardian.com theguardian.com). In effect, a handful of Silicon Valley engineers had tuned the dials of societal emotion for profit, and democracy paid the price.
This is algorithmic governance: when our collective life is steered by opaque code optimized for profit or power rather than the public good. We see it in Twitter’s trending topics, in YouTube’s autoplay queue, in TikTok’s uncanny ability to show each teen exactly what will obsess them. These systems “decide” what is visible or buried in our feeds, manufacturing consent (or dissent) not by overt propaganda from the state, but by the automated curation of information we consume daily. It is a 21st-century version of what Edward Herman and Noam Chomsky famously called “manufacturing consent” – except now each citizen’s consent is manufactured individually, tailored by AI, rather than through one-size-fits-all mass messaging.
Governments, recognizing this power, have tried various responses. The European Union’s regulators push Big Tech toward more transparency and accountability (through laws like the Digital Services Act), hoping to mitigate the harms of disinformation and polarization. But authoritarian regimes have taken a darker inspiration: they see in Silicon Valley’s systems a blueprint for total social control. In China, the state’s notorious Social Credit System is an attempt to integrate data from every facet of a citizen’s life – financial, social, digital – to reward or punish behavior. While far from fully realized, it represents the endgame of surveillance and data-driven governance: the ability to algorithmically enforce “good” conduct by dangling privileges or imposing penalties, all governed by inscrutable code.
Western democracies recoil at such overt social engineering, yet in milder forms they are drifting in that very direction. Policing agencies increasingly use predictive algorithms (fueled by big data) to determine where to deploy officers or whom to flag as a potential threat – raising the specter of data-driven profiling and loss of due process. During the pandemic, some democratic governments toyed with using smartphone data to monitor quarantine compliance or to nudge citizens toward vaccination. Few went as far as they could have, but the technology was ready and waiting.
The year 2025 finds us at a crossroads. On one hand, the public is more aware than ever of the threats of unchecked data harvesting and manipulative algorithms – the Cambridge Analytica scandal, successive whistleblowers like Haugen, and investigative journalists have seen to that. On the other hand, meaningful checks on these practices remain elusive. We have, in many ways, normalized the abnormal. Mass surveillance by corporations is accepted as the price of digital convenience. Pervasive microtargeted propaganda is dismissed as “campaigning as usual.” Even as trust in social media craters, billions still rely on it as their primary news source. In the absence of robust guardrails, the forces of polarization and manipulation continue to deepen. Democratic societies risk a kind of cognitive paralysis – unable to even agree on basic facts, let alone enact collective decisions – while authoritarian actors exploit the chaos. As one expert starkly put it, social media’s surveillance-fed algorithms have proven “compatible in certain ways with authoritarianism”, because they concentrate control over information flow in a few private hands and encourage users to acquiesce to constant surveillance in exchange for dopamine hits of content (journalofdemocracy.org). The same systems that can undermine a democracy can just as readily be used to tighten an autocrat’s grip by shaping public opinion to his will.
NATO’s Cognitive Warfare Doctrine – Turning Psy-Ops Inward
The recognition that the human mind is now a battlefield is not lost on the world’s military strategists. In fact, NATO – the Western military alliance – has officially begun grappling with the concept of “cognitive warfare.” In NATO’s own words, “in cognitive warfare, the human mind becomes the battlefield. The aim is to change not only what people think, but how they think and act” (nato.int). Unlike traditional combat, cognitive warfare targets beliefs and perceptions on a societal scale. A NATO-sponsored paper describes it bluntly: an adversary could “fracture and fragment an entire society” without firing a shot, by sowing doubt, conspiracy and division until that society “no longer has the collective will to resist” an aggressor (nato.int). These aren’t idle musings – they are drawn from real events. Russian information operations, from the annexation of Crimea to interference in Western elections, have provided NATO planners with concrete case studies of cognitive warfare in action. Likewise, the rise of ISIS showed how a potent online propaganda campaign could draw recruits from peaceful societies into a spiral of violence. NATO’s conclusion has been stark: the threat landscape now extends to the minds of the populace, and thus the Alliance must develop tactics to fight back in this new domain.
NATO’s Allied Command Transformation (ACT) has been actively developing a doctrine for cognitive warfare over the past few years (act.nato.int). In official releases, NATO acknowledges that “whole-of-society manipulation has become a new norm” and that adversaries exploit social media and disinformation to “affect attitudes and behaviours” across entire populations. A NATO Review article notes how such campaigns aim to “introduce conflicting narratives, polarise opinion, radicalise groups, and motivate them to acts that can disrupt or fragment an otherwise cohesive society” (nato.int). This language, notably, could just as easily describe what has been happening inside NATO member states due to domestic and foreign agitators. NATO traditionally concerns itself with external threats, but cognitive warfare blurs the line between foreign and domestic. If an enemy (say, Russia) is spreading disinformation to weaken an alliance country, that campaign targets the local population – so countering it means engaging your own citizens’ minds. The uncomfortable implication is that militaries and security services may start to view their own publics as terrain to be monitored and influenced, ostensibly for defensive purposes.
Indeed, evidence has emerged that some NATO governments are already turning psy-ops inward. In the UK, a scandal broke when a whistleblower from the British Army’s 77th Brigade – a unit specializing in information warfare – revealed it had been deployed during COVID-19 to monitor and counter “disinformation” among British citizens on social media (bigbrotherwatch.org.uk). For years, officials insisted the 77th Brigade never targeted domestic populations. But in early 2023, the UK Defence Secretary admitted in Parliament that the unit had used social media data to “assess UK disinformation trends”, effectively scanning the posts of ordinary Brits who voiced skepticism about lockdowns or vaccines (bigbrotherwatch.org.uk). Internal documents suggested the army analysts flagged messages from the public and even journalists that were deemed to undermine the official pandemic response. This revelation caused an uproar: the military engaging in domestic surveillance and influence activity runs counter to democratic norms. Yet it was rationalized as protecting the nation from harmful propaganda (in this case, dangerous health misinformation). The lines between national security and public opinion management had begun to blur.
A similar episode unfolded in Canada. In 2020, during the pandemic’s first wave, a team in the Canadian Forces’ Joint Operations Command drew up plans for a “domestic intelligence operation” as part of what they named Operation Laser (vice.com). A leaked report showed that military planners viewed the pandemic as creating conditions of fear and social disruption that could be exploited by malicious actors – or lead to civil unrest. To preempt this, they proposed a propaganda campaign targeting Canadian citizens to “reinforce messaging” from the government and head off possible disobedience to public health measures (vice.com). The document spoke of “shaping” and “exploiting” information and coordinating influence activities to maintain order. It acknowledged that lockdowns and loss of freedom were breeding anxiety, and suggested the military had to “mitigate” the public’s feeling of disempowerment by sculpting the information environment. When this plan came to light (thanks to an Ottawa Citizen investigation), officials quickly shelved it, and it became a talking point for conspiracy theorists. Military leaders admitted it was a misstep, stating that “problematic information operations activities” had caused reputational damage. But the fact remains: a NATO-country military contemplated using advanced influence tactics on its own people – something normally associated with counter-insurgency operations in war zones.
While these internal influence ops are usually framed as defensive – inoculating the public against foreign propaganda or dangerous rumors – they set a precedent. NATO’s cognitive warfare doctrine blurs external and internal targets. A Canadian or British general might argue that ignoring the cognitive domain at home leaves society vulnerable to enemy exploitation. But where is the line between protecting the public’s minds and manipulating them? The NATO Innovation Hub, which has studied cognitive warfare, warned that “there is no ownership of the cognitive domain” – everyone, soldiers and civilians alike, can be targets, and distinguishing influence from manipulation is difficult.
The doctrine speaks of building “resilience” in allied populations – essentially, preparing citizens to resist hostile information (nato.int). Yet building resilience can shade into actively shaping narratives that the authorities deem favorable. This raises profound civil liberty concerns. The specter of state propaganda directed at domestic audiences – using Big Tech data and psy-op techniques – is no longer theoretical. One former NATO official candidly noted that cognitive warfare tactics involve the “systematic exploitation of weaknesses” in human psychology (innovationhub-act.org). In a defensive posture, that could mean identifying segments of your population prone to enemy disinformation and then surreptitiously pushing counter-narratives to them. It’s not hard to see how that could morph into pushing any narrative the government wants to inculcate, effectively weaponizing social media data to keep the public in line.
Moreover, NATO’s new focus dovetails with the vast Five Eyes surveillance infrastructure that the U.S. and key allies operate. The Five Eyes (the intelligence alliance of the US, UK, Canada, Australia, New Zealand) have for decades shared signals intelligence, intercepting communications worldwide. Snowden’s leaks showed programs where NSA and GCHQ tapped directly into tech companies’ data streams and fiber-optic cables (theguardian.com). While officially aimed at foreign threats, this apparatus vacuumed up untold amounts of data about ordinary citizens (often their own citizens). Intelligence agencies rationalized it with the same “nothing to hide” logic and counterterror need. Now, imagine fusing that bulk surveillance capability with cognitive warfare intentions.
The result is chilling: real-time insights into public sentiment and individual beliefs, combined with means to inject influence on the same platforms monitored. We got a glimpse of this synergy in 2022, when Twitter and Facebook announced they had shut down a network of fake accounts that analysis traced back to the US military (washingtonpost.com). Researchers from Stanford and Graphika found over 150 bogus personas on those platforms pushing pro-Western narratives – some praising US actions or criticizing Russia and Iran (washingtonpost.com). The Pentagon ordered a review of its online psy-op programs after the exposure, implicitly acknowledging U.S. military units were conducting influence campaigns even on platforms that Americans use (washingtonpost.com). Ostensibly, those operations targeted foreign audiences (for example, in Central Asia or the Middle East). But social media is a porous domain – messages spill over borders, and Americans at home can easily see propaganda intended for Iran or vice versa. In effect, the information battlefield is global and borderless, and that means a NATO country’s own population can become collateral targets of its influence operations.
The NATO “cognitive security” discussions also highlight how corporate behavioral data is a treasure trove for militaries. NATO strategists note that adversaries can manipulate public opinion using precisely the tools of Facebook and Twitter (act.nato.int). So, to fight back, NATO would likely need access to similar data and tools. It’s not far-fetched that alliances could partner with social media firms or data brokers to obtain granular insights on segments of their citizenry – which demographics are vulnerable to what misinformation, who is drifting into extremist ideologies online, etc. Already, companies like Palantir (co-founded by tech mogul Peter Thiel) contract with Western militaries and intelligence, providing big-data analytics that can crunch everything from social media chatter to cellphone location trails. During the Afghanistan and Iraq wars, U.S. and NATO forces used programs to analyze local sentiment by scraping Facebook and Twitter posts in conflict zones. Now, those capabilities are being pointed inward.
For instance, Germany’s domestic intelligence agency has used social media analysis to identify far-right radicalization patterns. France’s military announced it was creating an “information warfare” command to counter foreign influence but also to “maintain the operational moral strength” of the French public – a phrase suggesting boosting narratives favorable to state policy. These developments all ride on one assumption: that access to vast pools of personal data and the ability to algorithmically target messages is a strategic advantage in modern conflict. The West watched Russia weaponize social media in 2016 and concluded, belatedly, that it must not be outgunned on this new front. Thus we see NATO documents calling for “synchronization of adversarial effects against emotional and subconscious domains” and urging member states to develop “offensive and defensive uses” of big data and cognitive techniques (innovationhub-act.org act.nato.int). In plainer terms, that means refining the art of mass persuasion (or disruption) through data-driven means, and shielding our own societies from the same.
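At its crudest, the kind of sentiment scraping described above reduces to a two-step pipeline: score each post, then aggregate by population segment. The sketch below uses an invented keyword lexicon and invented posts; operational systems rely on far richer language models, but the shape of the analysis is the same.

```python
# Crude keyword-lexicon sentiment aggregation over scraped posts.
# Lexicon, posts, and the "district index" are invented for illustration.
NEGATIVE = {"protest", "anger", "corrupt", "boycott"}
POSITIVE = {"support", "calm", "grateful", "safe"}

def post_sentiment(text: str) -> int:
    """Score one post: +1 per positive keyword, -1 per negative keyword."""
    words = set(text.lower().replace(",", " ").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

scraped_posts = [
    "crowds calm and grateful after aid delivery",
    "anger at corrupt officials in the district",
    "growing support for the new program",
]
index = sum(map(post_sentiment, scraped_posts)) / len(scraped_posts)
print(f"district sentiment index: {index:+.2f}")  # -> +0.33
```

Add a geolocation filter and a time series, and “local sentiment” becomes a dashboard metric a planner can watch in near real time, which is precisely why this capability is so tempting to point inward.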
The danger, of course, is that one does not defend a democracy by adopting the tactics of authoritarian mind control. If cognitive warfare doctrine turns inward without strict safeguards, NATO risks undermining the very democratic values it professes to defend. Influence operations directed at one’s own population cross an ethical line – eroding the trust between citizens and state, and treating the public as just another adversary to be “targeted.” Cognitive warfare may be a reality we have to grapple with, but a balance must be struck between resilience and repression. Otherwise, we face a future where “cognitive security” becomes a euphemism for domestic propaganda, and open societies suffer a death by a thousand cuts – their civil liberties chipped away under the relentless pressures of the surveillance/influence state.
Conclusion – The Erosion of Civil and Cognitive Sovereignty
Over the past decade, we have witnessed the convergence of surveillance capitalism with the machinery of political and military power. What began as a means to sell us products has evolved into a means to reshape our politics and even our identity as free-thinking individuals. The fundamental promise of liberal democracy – that citizens can make informed choices in a free marketplace of ideas – is being subverted by a different model, one where choices are engineered by those with access to our data and the tools to act on it. Civil sovereignty, our right to privacy and self-determination, has been steadily eroded by the corporate harvesting of personal data on one hand and the state’s hunger for control on the other. At the same time, cognitive sovereignty, the sanctity of our own mind and opinions, is under assault by sophisticated techniques of manipulation that most people can neither see nor comprehend in real time.
The Apple iPhone 16 in your pocket may look and feel like a personal device, but as we’ve uncovered, it is also a node in a vast network where your “private” life is continuously extracted, analyzed, and monetized – or weaponized. The “nothing to hide” fallacy has lulled many into complacency even as their every digital move is tracked. We must firmly reject that fallacy. Privacy isn’t about hiding wrongs; it’s about preserving the freedom to think and explore without constant observation. When that freedom erodes, so does the freedom to dissent or to simply be left alone. In a world where data is power, mass surveillance of the innocent is not a neutral stance – it is an invitation for abuse. We have seen how data can be turned into targeted “nudges” that push populations toward fear, hatred, or acquiescence. We have seen it wielded by profiteers to destabilize elections, by foreign autocrats to weaken rivals, and now by our own security institutions in the name of resilience.
What is at stake is democratic society’s control over its own destiny. If we allow surveillance capitalism to continue unfettered, we effectively outsource our cognitive security to private interests that do not answer to the public. And if we allow governments and militaries to piggyback on that surveillance infrastructure to conduct psychological operations – even with good intentions – we risk entering a slippery slope toward a surveillance state. There was a time when terms like “Big Brother” (from Orwell’s 1984) were used as warnings about an omnipresent government eye. Today, Big Brother wears a friendly disguise: a smartphone, a social network, an AI assistant. The spying isn’t always done by men in trench coats; it’s done by the apps we voluntarily use, and by the web of data brokers and intelligence alliances operating in the shadows. And the propaganda isn’t delivered via crackling loudspeakers; it’s served in our personalized feeds, sandwiched between cat videos and friends’ vacation photos, tailored just for us.
Reversing this dangerous trend will not be easy. It demands action on multiple fronts: robust data protection laws that genuinely curtail indiscriminate collection; strict oversight separating corporate data from state surveillance (no more backdoors and secret bulk feeds as exposed by Snowden); transparency in how algorithms curate our information; and perhaps most challenging, public education to restore some form of “cognitive immunity” in citizens. People must understand that being micro-targeted with content means someone engineered that message for them, and ask why. We need a renaissance of critical thinking and digital literacy to recognize manipulation attempts – essentially, inoculating the collective mind against the pathogen of disinformation. On a policy level, democracies might consider treaties or norms that ban the most egregious forms of peacetime cognitive warfare, much as we ban biological weapons. But such efforts will meet resistance as long as the tools are so effective and tempting to use.
Ultimately, the fight is for nothing less than what one might call cognitive sovereignty – the right of individuals and peoples to govern their own minds. As it stands, that sovereignty is being chipped away by a collusion of data-mining corporations and power-hungry actors who treat human thoughts and feelings as just another domain to be conquered. The erosion is gradual, almost imperceptible day to day, but it is accelerating. If we do not course-correct, we may awake in a few years to find that our opinions are no longer truly our own – they will be the echoes of whatever some algorithm decided we should believe, based on profiles built from our most intimate data. We will find that dissent has withered, not because it was outlawed, but because it can be quietly programmed out of the populace by manipulating the information flows we see. Dystopia, in this scenario, will not announce itself with the jackboot; it will creep in under the guise of personalized content and “strategic communications.”
The exposé I have laid out – from Apple’s privacy contradictions to Cambridge Analytica’s election hijinks, from the global disinformation mercenaries to NATO’s inward-turning psy-ops – reveals a common thread. In each case, mass data collection enabled a profound power imbalance: a handful of entities gained the ability to know and influence the many with unprecedented precision. This is a reversal of democratic ideals. It is time to reassert those ideals – to demand that technology serve the people, not subjugate them. Surveillance capitalism’s excesses must be curbed by law and by consumer pressure. Political use of personal data must be brought under democratic oversight, with harsh penalties for covert manipulation. And military or intelligence agencies venturing into the cognitive realm must be checked by ethical constraints and transparency to prevent the normalization of domestic psy-ops.
In battling the “nothing to hide” myth, I affirm that privacy is not about secrecy, but about agency. In battling surveillance-driven manipulation, I affirm that democracy is not a data science experiment, but a pact of trust and reason among citizens. The road ahead is arduous, but the alternative is acquiescence to a new form of soft tyranny – one where we live in a world of total visibility and ceaseless influence, falsely comforted by convenience while our capacity for independent thought is quietly eroded. The time to act is now, while we still recognize what’s happening. Our minds should remain our own, and our democracy – messy, noisy, and unpredictable – should remain in the hands of the people, not hijacked by those who treat “we the people” as programmable subjects. The alarm bells have been rung; it’s up to us to reclaim our cognitive sovereignty before it’s too late.
Sources:
Zuboff, Shoshana. The Age of Surveillance Capitalism. Interview in The Guardian (2019) theguardian.com.
Cadwalladr, Carole. Cambridge Analytica and Facebook scandal. The Guardian/Observer (2018) theguardian.com.
Politico Investigations. Cambridge Analytica’s psyops pedigree. Politico (2018) politico.com.
Pasternack, Alex. Before Trump, Cambridge Analytica built psy-ops for militaries. Fast Company (2019) fastcompany.com.
Walker, Christopher. The Era of Manipulation. Journal of Democracy (April 2019) journalofdemocracy.org.
Haugen, Frances. Testimony on Facebook’s harms. Quoted in The Guardian (2021) theguardian.com.
Warner, Mark. Statement on Cambridge Analytica. Quoted in The Guardian (2018) theguardian.com.
Big Brother Watch (UK). 77th Brigade monitoring of UK citizens. Press release (Feb 2023) bigbrotherwatch.org.uk.
Lamoureux, Mack. Canadian military’s planned COVID psy-op. VICE News (2021) vice.com.
NATO Review. Countering Cognitive Warfare. Johns Hopkins/Imperial College report (May 2021) nato.int.
NATO ACT. Cognitive Warfare Concept. Official website (2022) act.nato.int.
Nakashima, Ellen. Pentagon orders review of online psy-ops. Washington Post (Sept 2022) washingtonpost.com.
Kirchgaessner et al. ‘Team Jorge’ disinformation for hire. The Guardian (Feb 2023) theguardian.com.
Newman, Lily Hay. Apple’s CSAM scanning controversy. Wired (Dec 2022) wired.com.


