Silent Shaping of Reality
Imagine that one morning you scroll through your social media feed and notice something odd – the voices you used to see have gone quiet. That fiery independent commentator? Missing. The off-narrative news clipping you shared? Hardly any likes. It feels as if an unseen hand is trimming your information diet. This is the eerie realm of shadow banning, where social media algorithms silently suppress or throttle content without ever telling you. Companies prefer sterile terms like “algorithmic suppression” or “de-amplification,” but the effect is the same: what you see (or don’t see) online is being engineered in secret washingtonpost.com. As The Washington Post observed, these platforms operate as “black boxes,” tightly guarding how they decide what content to hide washingtonpost.com. What began as a bid to filter spam and hate has morphed into a powerful system of invisible censorship shaping our digital reality.
Viewer comment from my YouTube channel (SaltCubeAnalytics)
Crucially, you won’t get an alert when it happens. Your posts remain visible to you, giving the illusion nothing is wrong. But behind the scenes, the platform’s algorithms may have quietly downranked your content so fewer people see it, or removed it from search results and discovery feeds altogether insights.som.yale.edu. This is censorship’s new frontier – not an outright ban (which would at least be obvious), but a stealth confinement of information. And it’s not just happening to outspoken creators or fringe channels. Ordinary users are affected too. In one survey, nearly 1 in 10 Americans on social media suspected they’d been shadow banned. People from all walks of life – from sex educators to activists and journalists – have felt the “looming threat of being invisible,” doing what they always did online only to find their reach mysteriously muzzled washingtonpost.com.
This piece is an investigative deep-dive into how algorithmic censorship – especially on major platforms like YouTube and X (formerly Twitter) – distorts our online environments without our knowledge. It’s a story of how your feed can be narrowed and sanitized to manufacture a false sense of consensus, how your perceptions are subtly steered, and how this ties into a wider system of information control orchestrated in part by Western governments. In the wake of the Ukraine war and clashes with Iran, the U.S. and EU have aggressively leaned on Big Tech to police narratives, even as – or rather precisely because – those same tech giants profit from military contracts. The result is a feedback loop of profit and propaganda: tech companies quietly tweaking what we see to align with “approved” viewpoints, under the guise of fighting “disinformation” or “extremism.” All of this is sold to the public as safety and security, but as we’ll see, it raises uncomfortable echoes of the very authoritarian tactics the West claims to oppose.
What Exactly Is “Shadow Banning”?
Shadow banning is essentially censorship by algorithm, done in a way that the target isn’t supposed to realize. Traditionally, if you broke a platform’s rules, you might get a notification of removal or a ban. With shadow banning, nothing so explicit happens – instead, the platform quietly reduces your visibility. Your post remains on your profile, but it “would appear less, or not at all, in the timelines of other users,” as one study explains insights.som.yale.edu. In the past, Twitter’s internal teams used the euphemism “visibility filtering” for a host of tools that achieve this: blocking a tweet from trending, excluding it from search results, or limiting which users see it in their feeds en.wikipedia.org. Today, it’s the algorithmic equivalent of the platform muzzling your microphone – you can still speak, but only a handful of people in a near-empty room will ever hear you.
Viewer comment from my YouTube channel (SaltCubeAnalytics)
Platforms have long denied “shadow banning” exists, often quibbling over semantics. For instance, Instagram’s CEO once claimed shadow banning was “not a thing,” though he was referring to the narrow definition of completely muting someone’s posts to everyone except themselves washingtonpost.com en.wikipedia.org. The reality is that all major social networks now use some version of this technique. As a Washington Post tech columnist put it, “shadowbanning is real…companies now employ moderation techniques that limit people’s megaphones without telling them”, often targeting what they call “borderline” content washingtonpost.com. Facebook and YouTube don’t like the term “shadow ban,” but they openly acknowledge de-amplifying posts that skirt the line of policy violations washingtonpost.com(1) washingtonpost.com(2). Meta (Facebook/Instagram) even introduced an Account Status feature in late 2022 for professional users, which notifies them if their content is marked “not eligible” for recommendations washingtonpost.com. In other words, Instagram quietly began telling some users that yes, we’re hiding your posts from the wider feed – a tacit admission that shadow banning (by another name) is part of the moderation toolbox.
How do they do it? The tactics have jargon-y names but are relatively easy to grasp (a toy sketch of how they combine follows this list):
Impression Throttling: The platform’s algorithm limits how many times or to whom your post is shown. Instead of reaching 10,000 people, maybe only 500 see it in their feeds due to deliberate down-ranking. This drastically shrinks the “impressions” (views) your content gets, effectively strangling its reach.
Delisting or De-ranking: Your content is removed from discovery surfaces. On X/Twitter, this meant being hidden from search suggestions or the trending page. On YouTube, it could mean your video no longer appears in recommended videos or fails to show up in search results for relevant keywords. A leaked Twitter document from the “Twitter Files” revealed tags like “Search Blacklist” and “Do Not Amplify” applied to certain accounts – ensuring those users’ posts wouldn’t be easily found or amplified on the platform en.wikipedia.org.
Invisible Strikes: Many platforms operate internal “strike” systems for rule violations – you get a strike for posting something against the rules. But often these strikes are invisible to the user. Accumulate a couple, and your account might be secretly flagged as low-reputation. YouTube, for example, has been reported to give creators “quality” scores based on their content’s alignment with guidelines; get on the wrong side of that score and your videos might quietly stop showing up in others’ feeds. Facebook whistleblower Frances Haugen exposed that the company did something similar: its algorithms would score posts on factors like their predicted risk to “societal health” or potential for misinformation, then demote those deemed risky washingtonpost.com. Users were never told that their post was algorithmically buried due to a low score – they simply saw dramatically less engagement and had to guess why.
Muting (Ghost Banning): In the most extreme form, the platform makes your content invisible to everyone but you. This was Twitter’s own stated definition of a “shadow ban” en.wikipedia.org, though Twitter claimed it didn’t actually do this broadly. However, anecdotal reports abound – for instance, a Reddit user might discover their posts get zero interaction and later learn moderators had “ghost-banned” them so that nobody else ever saw their contributions. On a site like Instagram, a shadow ban often means your posts don’t show up under hashtag searches, effectively removing you from public visibility while leaving you none the wiser.
Auto-Forgetting: This is a less-discussed tactic, but users and some developers have noticed it. Content might get normal traction at first, but then the algorithm quickly forgets about it – it stops being shown soon after posting, even if it was doing well. Essentially, the shelf-life of the post is artificially shortened. This could be a way to let something appear (avoiding outright removal) but ensure it doesn’t stay visible long enough to gain momentum. It’s akin to the platform quietly sweeping a topic under the rug once a brief time window has passed.
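None of the platforms publish their ranking code, so the following is only a toy sketch under my own assumptions – the Post fields, the multipliers, and the flag names are illustrative, loosely echoing labels like “Do Not Amplify” from the Twitter Files – meant to show how a handful of invisible switches can combine to throttle reach without ever deleting anything.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """Hypothetical post record; field names are illustrative, not any platform's real schema."""
    predicted_engagement: float = 1.0
    author_strikes: int = 0         # invisible strikes accumulated by the author
    borderline: bool = False        # flagged as brushing against a policy line
    do_not_amplify: bool = False    # excluded from recommendation surfaces
    search_blacklist: bool = False  # excluded from search results
    age_hours: float = 0.0

def visibility_score(post: Post) -> float:
    """Toy ranking score: each hidden flag silently multiplies reach downward."""
    if post.do_not_amplify:
        return 0.0                           # ghost-banned from others' feeds entirely
    score = post.predicted_engagement
    if post.borderline:
        score *= 0.1                         # impression throttling / de-amplification
    score *= 0.5 ** post.author_strikes      # invisible strikes lower account "reputation"
    if post.age_hours > 6:
        score *= 0.2                         # "auto-forgetting": an artificially short shelf-life
    return score

def searchable(post: Post) -> bool:
    """Delisting: the post still exists, it just never surfaces in search."""
    return not post.search_blacklist

# The author sees their post on their own profile either way; only its reach changes.
muted = Post(borderline=True, author_strikes=2, age_hours=8)
print(visibility_score(muted), searchable(muted))  # a small fraction of a "clean" post's reach
```

The point is not the specific numbers but the shape: the post is never removed, nothing is disclosed to the user, and each penalty on its own looks like an innocuous ranking tweak.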
Viewer comment from my YouTube channel (SaltCubeAnalytics)
These practices are often shrouded in secrecy. Companies worry that if they reveal exactly how they throttle or boost content, bad actors will game the system. But that opacity also means users are left guessing, stuck on the question: “Am I shadow banned, or do people just not like my post?” In the words of one digital rights expert, “something feels off, but [users] can’t see from the outside what it is, and feel they have little power to do anything about it” washingtonpost.com. That uncertainty – the self-doubt bordering on paranoia washingtonpost.com – is actually part of what makes shadow banning so pernicious. If you knew for sure the platform was muting you, you could raise a stink or demand an explanation. Instead, you’re left to second-guess yourself.
Narrowing the Feed, Numbing the Mind
For everyday users, the immediate impact of shadow banning and algorithmic censorship is a subtle narrowing of the online world. Your feed gradually homogenizes. Controversial or dissenting voices start to vanish from the conversation, leaving a chorus of agreement. You might notice that all the posts you see on a contentious issue mirror one viewpoint – giving the impression that “everyone sensible agrees.” This false sense of consensus is manufactured by omission. When social networks quietly purge or suppress one side of a debate, they create a distorted reality where the censored perspective might as well not exist. As one Yale University study demonstrated, a platform can selectively mute certain posts and amplify others in a way that “appeared neutral to an outside observer”, yet over time it “shifted users’ positions” and even increased polarization in the simulated network insights.som.yale.edu(1) insights.som.yale.edu(2). In a chilling analogy, the researcher described it “like a frog sitting in a pot of water…relaxing, and suddenly, he’s cooked.” A social network could “drive people towards one point of view” so gradually that neither the users nor even watchdogs realize it – any regulators inspecting the system would see content being moderated on “both sides,” not realizing one side is being strategically turned down more than the other insights.som.yale.edu. By the time anyone notices, “suddenly everybody thinks the earth’s flat” insights.som.yale.edu (as the Yale researcher quipped) – or whatever singular narrative the hidden hand has nudged into place.
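The dynamic the Yale team describes can be made concrete with a toy simulation. This is a hedged sketch under my own assumptions, not the researchers’ actual model: opinions sit on a scale from -1 to +1, each side’s posts get muted at a different rate, and everyone drifts slightly toward the average of what they are actually shown.

```python
import random

random.seed(0)
opinions = [random.uniform(-1, 1) for _ in range(1000)]  # -1 and +1 stand in for the two "sides"

# Hypothetical suppression rates: BOTH sides get moderated, just not equally.
P_SHOW = {"negative": 0.60, "positive": 0.85}

def visible(opinion: float) -> bool:
    """Asymmetric visibility filter: each post is shown with a side-dependent probability."""
    side = "negative" if opinion < 0 else "positive"
    return random.random() < P_SHOW[side]

for _ in range(200):
    # Each round, users see a filtered sample of other people's posts...
    shown = [op for op in random.sample(opinions, 200) if visible(op)]
    feed_mean = sum(shown) / len(shown)
    # ...and drift slightly toward what "everyone" appears to think.
    opinions = [op + 0.05 * (feed_mean - op) for op in opinions]

print(round(sum(opinions) / len(opinions), 2))  # the population mean ends up on the less-muted side
```

To an auditor counting takedowns, moderation here looks two-sided; only the hidden show-probabilities reveal which way the pot is being heated.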
The psychological impact on users who are targeted by shadow bans is equally profound. Imagine you’ve been posting about a cause or topic you’re passionate about. One day, your engagement (likes, comments, shares) plummets. Posts that once sparked lively discussion now fall into a void. You wonder: Did I suddenly become boring? Did I offend my followers? You might blame yourself, feel “filled with self-doubt”, or even start to question your own sanity washingtonpost.com. This is gaslighting via algorithm – the system never tells you it disapproved of your content, so you’re left to internalize the failure. Many users respond by self-censoring: they withdraw from discussing the “edgy” topics that got them in trouble (even if they can’t prove that’s what caused the drop). Over time, the algorithm has trained them to stick to safer, more anodyne content. The range of acceptable discourse narrows, as each person individually adjusts to avoid the mysterious punishment of invisibility.
Even those of us who aren’t directly censored can feel a kind of ambient psychological effect. Social media feeds are where we construct our understanding of what other people think – our community, our nation, the world. If those feeds are pruned and shaped, our emotional calibration to reality is based on skewed input. During major events – say, a war or an election – this can create confusion or complacency. For example, if opposition voices are algorithmically muted in your feed, you may feel an eerie “emotional confusion” when you sense something is wrong (maybe you hear about protests or dissent elsewhere) yet see everyone online marching in virtual lockstep. It’s digital gaslighting at scale: you might doubt your own judgment or even the evidence of your senses because the collective sense presented to you has been sanitized.
Consider a relatable scenario: YouTube’s auto-play and recommendation engine. Perhaps you remember in years past falling down a rabbit hole of diverse content – some of it outlandish, sure, but a mix of viewpoints. Fast forward to today, and many users report that YouTube’s algorithm feels “tamer.” That’s by design. YouTube (owned by Google) has reportedly tweaked its algorithm to aggressively down-rank “borderline content” – videos that don’t outright violate rules but brush against them (conspiracy theories, controversial political views, or simply a point of view that goes against the mainstream narrative, and so on). Independent creators in 2024-2025 noticed that if they discussed topics like the Ukraine war or Israel’s slaughter in Gaza and the West Bank, their videos would mysteriously stop getting recommended, whereas mainstream network clips on the same topic would flood the recommendations. The result? An average user’s YouTube feed gravitates more to establishment-friendly content. If you only passively watch what YouTube serves up, you might never encounter the dissenting voices at all. Your digital environment has been consciously narrowed, though you’d never know unless you went actively looking for what’s missing.
Now multiply that by Facebook, by Instagram, by TikTok – each with its own flavor of algorithmic confinement. On Instagram, artists and educators have noticed their reach shrink when they post about sensitive subjects (e.g. sexuality, or even discussing Palestinian human rights). Indeed, a report by the Al Jazeera Centre for Studies found that pro-Palestinian content was disproportionately affected by practices like shadow banning and algorithmic censorship, effectively erasing many Palestinian voices from the online conversation. Posts about Gaza or Palestinian suffering saw reduced visibility and engagement – a quiet dampening that made advocacy and truth-telling far less effective. The distortion is two-fold: on one side, the Israeli narrative and sympathetic voices are allowed to dominate unchallenged; on the other, the Palestinian perspective is literally hidden from view for much of the global audience. This creates a powerful, and false, sense that the “whole world” uniformly supports one side, potentially swaying public opinion simply by silencing counter-narratives. As the study noted, these tactics raise grave concerns about “the erasure of Palestinian voices” in digital spaces studies.aljazeera.net.
In psychological terms, shadow banning leverages our brain’s wiring against us. Humans are social creatures; we crave feedback and interaction. When those disappear, it’s painful. When our posts get no reaction, it’s a form of social punishment – and we instinctively try to avoid punishment. This is where behavioral conditioning comes in.
The Skinner Box of Social Media: Conditioning and Control
It may be unsettling to realize, but social media platforms operate much like a Skinner box, the classic behavioral experiment chamber where animals are trained with rewards and punishments. In our case, we are the lab rats, and the currency is attention. The platforms use a variety of psychological tricks – operant conditioning, intermittent reinforcement, dopamine feedback loops – to shape our behavior and keep us engaged. Normally, these tricks are discussed in the context of making us scroll endlessly (i.e. addiction), but they are equally relevant to how censorship is enforced algorithmically.
Operant conditioning is the idea that behaviors can be shaped by rewards and punishments. Post something the platform likes (say, a fluffy family photo or a topic that boosts engagement without controversy) and you get a flood of likes, positive comments, new followers – a rewarding hit of social validation. Post something the platform deems “undesirable” (maybe a politically charged opinion, or content that triggers a moderation algorithm) and you get silence – the reward is withheld by way of throttled reach. Do it repeatedly, and perhaps you notice your follower count growth stagnates, or your posts consistently languish with zero engagement. These are negative consequences the system delivers. Over time, like a pigeon in a Skinner box, you learn which lever to press for food and which one shocks you. Users adapt: they either stop posting the punished content or move to platforms where that content isn’t punished. In behavioral psychology terms, the platform has extinguished the undesirable behavior by withholding reinforcement – making the experience so unrewarding that you eventually stop trying.
The social media algorithms add an extra twist: intermittent reinforcement, meaning the rewards come unpredictably. Research has shown that rewarding a behavior sporadically, rather than consistently, makes it far more resilient and addictive motleyrice.com(1) motleyrice.com(2). This is the principle that makes slot machines so compelling – you never know when the jackpot (or even a small payout) will hit, so you keep pulling the lever. Social media’s analog: you post and sometimes the algorithm lets you blow up with likes, other times you get nothing. That unpredictability can actually hook you harder than consistent success. It also keeps you guessing: maybe the problem isn’t what I posted, maybe I just posted at the wrong time? Or perhaps I need to use different hashtags? Users often experiment and keep chasing the high of a successful post, thereby playing right into the platform’s engagement-boosting aims.
From a censorship perspective, intermittent reinforcement can be used to keep shadow-ban victims in the dark. If every single time you posted about Topic X your engagement was zero, you might quickly suspect suppression and cry foul. But if occasionally one of your Topic X posts performs semi-normally (perhaps because a particular triggering keyword wasn’t present, or moderation was momentarily lax), you’re more likely to stick around and keep posting – hoping for the next hit. The doubt persists: “Maybe it’s just random. Maybe I’m not shadow banned after all. Maybe I need to do something slightly different and it’ll take off.” The platform, in effect, keeps you just short of the point at which frustration outweighs the desire to participate.
Then there’s prediction error – a concept from neuroscience describing the gap between what we expect and what actually happens. Our brains treat unexpected outcomes as significant – they prompt us to learn and adjust. Social media algorithms manipulating your reach create constant prediction errors. Perhaps you expected a provocative post to get lots of reactions (because it normally would among your followers), but instead it flops – that gap between expected feedback and actual silence is glaring. You register, even if subconsciously: something about that post didn’t work; maybe I shouldn’t go there. Conversely, when you hew to what the algorithm favors, you might get an unexpected boost (“Wow, a thousand views on that trivial video I posted!”), reinforcing the idea that this is the kind of content I should stick to. In essence, the algorithms are training us like Pavlov’s dogs, tinkering with the stimuli (visibility, likes) to prompt desired reactions.
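Prediction error has a simple textbook form – nudge your expectation by a learning rate times the gap between what actually happened and what you expected – and a few lines of code show how an intermittent reward schedule exploits it. The sketch below is a toy model under my own assumptions (the 10% “jackpot” rate and the learning rate are arbitrary), not anything a platform discloses.

```python
import random

random.seed(1)

def update_expectation(expected: float, actual: float, lr: float = 0.2) -> float:
    """Rescorla-Wagner style update: shift the expectation by a fraction of the prediction error."""
    prediction_error = actual - expected   # the gap between what happened and what was expected
    return expected + lr * prediction_error

# Topic A is reliably rewarded; Topic B is throttled but gets rare wins (intermittent reinforcement).
expect_a, expect_b = 0.5, 0.5
for _ in range(50):
    reward_a = 1.0                                    # consistent engagement
    reward_b = 1.0 if random.random() < 0.1 else 0.0  # occasional jackpot, mostly silence
    expect_a = update_expectation(expect_a, reward_a)
    expect_b = update_expectation(expect_b, reward_b)

print(round(expect_a, 2), round(expect_b, 2))
# Topic A's expectation converges near 1.0; Topic B's hovers low but never hits zero:
# just enough hope to keep posting, never enough certainty to cry foul.
```

The throttled topic never produces a clean “this is dead” signal; the expectation simply hovers low enough to sting and high enough to keep you pulling the lever.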
The addictive nature of social media also plays a role in trapping users within these confines. Platforms exploit brain chemistry – notably the neurotransmitter dopamine – to keep us hooked. Each like or positive notification gives a small dopamine hit, a feeling of pleasure. We begin to crave it. Over time, our brains even start releasing dopamine in anticipation of checking our apps motleyrice.com. This is why you might feel an urge to check your post’s performance a dozen times in the hour after you share something. Now combine that with algorithmic throttling: if you’re shadow banned, those expected rewards don’t come, but the craving for them remains. You might double down, posting more frequently or more outrageously in hopes of regaining favor – ironically generating more content (and thus more platform engagement) in the process. It’s easy to see how a user can spiral, emotionally, when they’re unknowingly trapped in this manipulative operant conditioning cycle. They oscillate between frustration (no one is engaging) and hope (maybe the next post will break through), a cycle familiar to any gambler. Indeed, experts have directly compared social media’s intermittent rewards to slot machines, where “users don’t know when they’ll get the rush… This encourages them to keep … scrolling for the next gratifying ‘like’” motleyrice.com.
What does all this mean for the bigger picture of information control? Essentially, the sophisticated psychological design of these platforms – meant to maximize profit by maximizing time-on-site – also provides a perfect apparatus for cognitive manipulation at scale. If you want to subtly steer a population’s beliefs or behaviors, you couldn’t ask for a better tool than an algorithmic feed that people check obsessively and that you can tune invisibly. It’s like having millions of people voluntarily enter a behavior-modification chamber every day after lunch, each bringing detailed profiles of their fears, preferences, and social connections along with them. No need for brute-force propaganda or overt censorship when you can achieve similar ends with a finely tuned algorithmic dial.
Collusion of Power: Big Tech and Big Brother
It’s no coincidence that as social media became the dominant public square, Western governments and security agencies took keen interest in how to influence and control information on these platforms. In the past few years, evidence of direct coordination between Big Tech companies and state actors in the U.S. and Europe has steadily emerged – painting a picture of a tightly knit censorship-industrial complex. The motives aren’t hard to fathom: governments want to combat what they call “disinformation” (especially from foreign adversaries) and maintain public order, while platforms want to curry favor with regulators and tap lucrative government contracts. Together, they share an interest in shaping public perception – and shadow banning algorithms are an ideal instrument to do so quietly.
Consider the revelations from the U.S. lawsuit Missouri v. Biden (also known as Murthy v. Missouri at the Supreme Court). This case uncovered how U.S. federal officials regularly flagged content and leaned on social media companies to remove or downrank posts on topics like COVID-19 and elections. While a federal judge initially found that the government had likely “coerced” or “significantly encouraged” platforms to censor content, the Supreme Court ultimately dismissed the case on technical grounds (standing) scotusblog.com. Yet even in dismissal, the judges didn’t deny the underlying phenomenon – in fact, Justice Samuel Alito warned in dissent that this could be “one of the most important free speech cases in years,” noting evidence that “the White House coerced Facebook into censoring” certain speech scotusblog.com. The legal back-and-forth aside, the case confirmed what many had suspected: there was a constant backchannel between government agencies and tech platforms, with emails and calls flying whenever officials wanted something suppressed (be it alleged misinformation about vaccines or inconvenient news stories). This wasn’t a conspiracy theory; it was documented. One internal Facebook email (revealed in a related transparency report) famously quoted a White House official berating the company: “You are killing people” – implicitly demanding they censor more COVID “misinfo.” Facing the immense regulatory and public relations power of the U.S. government, companies often complied.
On Twitter, the situation prior to Elon Musk’s takeover in late 2022 was laid bare by the now-famous Twitter Files – internal documents Musk authorized for release. Among the bombshells: Twitter had a whole system for visibility filtering (VF) accounts, complete with internal labels like “Trends Blacklist” or “Search Blacklist”, to quietly limit reach en.wikipedia.org. More disturbingly, Twitter was directly aiding U.S. military propaganda efforts. In 2017, at the request of the Pentagon, Twitter put 52 Arabic-language accounts run by U.S. Central Command (CENTCOM) on a special “whitelist”, exempting them from spam filters and giving them algorithmic boosts aljazeera.com. These accounts, which did not disclose their U.S. military ties, pushed narratives touting the U.S. and its allies (like Saudi Arabia) and demonizing their adversaries: for example, praising the precision of American drone strikes, promoting criticism of Iran, and justifying the Saudi-led war in Yemen aljazeera.com. In emails, Twitter executives acknowledged this was covert propaganda and discussed how to keep the Pentagon’s sock-puppet network from being exposed aljazeera.com. Yet they allowed it to continue for years. The platform that loudly announced crackdowns on “state-sponsored bots” was quietly assisting state-sponsored bots – as long as they were Western. This stark double standard shows how aligned Silicon Valley could be with Washington’s strategic communications goals. When Twitter was caught in 2022, it issued no press release about banning U.S. military fake accounts, even as it regularly publicized takedowns of Chinese or Iranian influence networks.
Such coordination isn’t limited to Twitter. Facebook (Meta) has had extensive interaction with U.S. government officials over what content to police – from election misinformation to foreign influence. Whistleblower Frances Haugen’s leaks in 2021 showed Facebook’s leadership was in regular contact with U.S. authorities, and internal docs even referenced complying with requests from “government partners” to throttle certain posts. And during the 2020 election and January 6, 2021 Capitol riot aftermath, platforms like Facebook and YouTube followed government briefings and advisories closely, removing whole categories of content (sometimes overly aggressively, as some internal post-mortems admitted). In the name of national security and preventing violence, a de facto alliance between tech and state emerged.
Across the Atlantic, the European Union embarked on its own heavy-handed partnership with Big Tech. The EU’s 2022 Digital Services Act (DSA) created a code of practice for online platforms to clamp down on disinformation, effectively deputizing them to censor content Europe deemed harmful (like Russian state media narratives). More brazenly, in May 2025 the EU for the first time imposed sanctions on individual journalists and pundits for spreading “propaganda” related to the Russia-Ukraine war. Citing an “international Russian campaign of media manipulation and distortion of facts,” Brussels added over a dozen people to a sanctions blacklist – including two European citizens, German journalists Alina Lipp and Thomas Röper, who had voiced pro-Russian views mronline.org. This measure, part of the EU’s 17th package of Russia sanctions, was enacted via Council Decision (CFSP) 2025/966 on May 20, 2025 mronline.org. It marked an astonishing precedent: the EU effectively criminalized certain speech by Europeans, equating it with a threat to security. One Swiss observer noted in dismay that “the EU has degenerated into a dictatorship [that] restricts freedom of expression to an unacceptable extent”, warning that Orwellian times had arrived mronline.org. (Tellingly, neutral Switzerland refused to mirror that particular sanctions step, citing the high value it places on press freedom mronline.org.)
When speech is sanctioned at such high levels, it sends a clear signal to platforms. If the EU has formally declared these individuals and their outlets persona non grata (travel bans, asset freezes, etc.), one can imagine that YouTube, Facebook, and others will feel compelled to bury or ban their content in EU jurisdictions – if they haven’t already. Indeed, since early 2022, Europe outright banned Russian state media channels RT and Sputnik from the airwaves and the internet consilium.europa.eu. Tech companies like Google and Meta swiftly complied, not only blocking EU users from accessing those outlets’ pages but also aggressively de-ranking any content that slipped through. This was unprecedented in Western democracies – banning media en masse – and it showcased how, under geopolitical stress (war in Ukraine), Western governments are willing to jettison free-expression principles and enlist Big Tech in doing so. As the EU’s own statements made clear, this was about “suspending broadcasting” of disfavored voices and even targeting “physical elements of digital networks” tied to such information threats consilium.europa.eu.
Behind these public moves, there’s a deeper synergy. Big Tech companies are increasingly woven into the Western military-intelligence apparatus – not just ideologically, but financially. Amazon, Microsoft, Google, and to a lesser extent Meta, have scored major defense and intelligence contracts in recent years. This means Silicon Valley’s bottom line is now partly tied to government largesse. For example, Amazon Web Services runs cloud servers for the U.S. intelligence community, and Amazon and Google are both core contractors in the Pentagon’s $9 billion Joint Warfighting Cloud program for advanced military cloud computing intereconomics.eu. Google notoriously had an internal revolt in 2018 over “Project Maven,” an AI project to improve U.S. drone targeting; Google backed out of that contract after employee protests, but Microsoft and Amazon eagerly took its place intereconomics.eu. In 2024, Google was caught quietly expediting AI services for the Israeli military during the Gaza war, even after Google had fired employees who protested doing business with Israel aurdip.org aurdip.org. Internal documents revealed Google Cloud executives in October 2023 urgently sought to provide advanced AI tools to the Israel Defense Forces – specifically to prevent the IDF from switching to Amazon’s cloud – effectively rushing to help power Israel’s war operations with Google AI aurdip.org aurdip.org. The profit motive (don’t lose a big military client to a competitor) aligned perfectly with a geopolitical motive (support an ally’s war effort). What chance do a few controversial voices on YouTube have in the face of such priorities? If suppressing some content is the price of securing government favor and contracts, it seems many tech companies are more than willing to pay it.
Meta (Facebook’s parent) also dove into the defense world. In late 2024, Meta announced it would allow U.S. national security agencies and defense contractors to use its cutting-edge AI model, LLaMA – despite an earlier policy banning military AI use theguardian.com. Meta essentially carved out a special exception so that the U.S. and its “Five Eyes” allies (UK, Canada, Australia, NZ) could leverage its AI tools for defense purposes theguardian.com. The company portrayed this as a patriotic duty to help the West stay ahead of China in AI. Moreover, Meta inked a partnership with Anduril, a defense tech firm, to build AI-powered combat goggles for the U.S. military qz.com. So we have Facebook making money supplying militaries, while simultaneously (through its platforms) controlling information flows among civilians. It doesn’t require a conspiracy theory to see a conflict of interest: if, say, the U.S. government strongly desires a certain narrative about a war or foreign policy issue, Facebook’s incentives (financial and political) align with granting that desire via its algorithmic levers.
This blurring of the line between public and private propaganda is sometimes dubbed the new “Digital Military-Industrial Complex.” During the Cold War, arms contractors built missiles and jets; today, tech giants provide cloud computing, AI, and information dominance. A 2025 analysis in Intereconomics noted that U.S. federal contracts awarded to Big Tech firms increased thirteen-fold from 2008 to 2024 as the Pentagon recognized that controlling data and networks is as crucial as controlling the skies intereconomics.eu. With that interdependency, Big Tech and Big Government become, in effect, partners. As one academic put it, controlling digital networks allows for “weaponizing interdependencies” – you can leverage the dominance of platforms to impose your will on the information landscape intereconomics.eu. And it’s virtually impossible to do so without the “support of Big Tech, as the latter controls [the] knowledge, technologies… and physical infrastructures” of the digital domain intereconomics.eu. Translation: If governments want to wage information war or control narratives, they need the tech firms on board – and the firms are on board, both out of conviction and out of profit-seeking.
Cognitive Warfare: The Battlefield of Your Mind
What’s remarkable is that Western military and intelligence strategists are openly talking about these tactics – albeit usually framed as measures against foreign enemies. There’s a buzzword: “cognitive warfare.” NATO documents describe the “cognitive domain” as the new front line, a space of conflict where the goal is to influence, even “alter, how a target population thinks – and through that how it acts” frontiersin.org. In plain language, cognitive warfare means messing with the perceptions, beliefs, and decision-making processes of an adversary, using everything from propaganda and psychological operations to cyber tools and AI. A 2024 Frontiers journal analysis of NATO’s concept explains that it involves exploiting human cognition (how we think) and technology to “disrupt, undermine, influence, or modify human decision-making” frontiersin.org. Unlike traditional information warfare, which focuses on controlling information flow, cognitive warfare aims at controlling the audience’s reaction to information frontiersin.org. It’s about shaping thoughts as much as (or more than) censoring words.
NATO and allied militaries insist these methods are necessary to counter disinformation from autocratic adversaries like Russia or terror groups – and indeed, Russia and others practice their own heavy-handed forms of cognitive warfare, from troll farms to state TV propaganda. But the danger is that these tools, once developed, don’t distinguish between foreign enemies and domestic dissent. When NATO strategists talk about the need to “prepare societies” to confront cognitive threats nato.int, it can bleed into conditioning one’s own population to be resilient – or compliant – in the face of certain narratives. For example, NATO has run studies on how to inoculate the public against “fake news” (a seemingly benign aim) and also on how to leverage social media to push NATO’s view of conflicts (more problematic because it shades into manipulation).
Recent Western security documents increasingly emphasize “whole-of-society” resilience to “disinformation” – essentially, getting the public, media, and tech platforms aligned to push back on narratives deemed harmful. The G7 nations, in their 2023 and 2024 meetings, spoke repeatedly about building “digital resilience” and a “shared framework” for trustworthy information online atlanticcouncil.org. While couched in defensive terms, this often entails closer coordination with social media companies to tweak algorithms and moderation in line with government objectives. The subtext: liberal democracies must get savvier at information control, or else risk losing the “narrative war” to autocrats. In the U.S., the Pentagon and State Department have funded research into “effective counter-messaging” and even deception tactics to influence online discourse (all supposedly in the name of fighting extremist ideology or foreign propaganda).
It is a short leap from countering propaganda to using propaganda. An infamous U.S. military slide deck circulated in 2021 described “cognitive security” and detailed how adversaries seek to sow distrust within democracies. To “defend” against that, one might argue, we need to ensure the population trusts the narratives from our side. And how do you build trust in your narrative? Perhaps by suppressing those who contradict it and amplifying those who support it – precisely what shadow banning accomplishes, albeit under the radar.
This is where the hypocrisy becomes glaring. Western leaders often lambast China’s Great Firewall or Russia’s draconian media laws, pointing out (correctly) that those regimes blatantly censor and propagandize. Western societies, by contrast, hold up free speech as a core value. Yet here we are: building a system of covert censorship that is arguably more insidious because it hides behind the façade of free choice and private-sector independence. A Chinese or Russian Internet user largely knows that certain sites are blocked and that state censors patrol online content. A Western Internet user, however, is meant to think all voices compete openly on merit – unaware that the game is rigged by unseen algorithms “for their own good.”
In effect, the West has found ways to practice the tactics of authoritarian information control while maintaining plausible deniability. If challenged, officials and tech execs say: No, we’re not censoring – we’re just removing disinformation or reducing harmful content visibility. They invoke safety, public health, national security. These are legitimate concerns, to be sure. But one must ask: who decides what is “disinformation” or “harmful”, and might that power be abused? When an EU decision literally sanctions journalists for “propaganda” mronline.org, and a U.S. White House leans on Facebook to throttle views it dislikes scotusblog.com, we’re in dangerous territory for democracy. As one Substack commentator grimly put it, Orwell’s “Ministry of Truth” has become reality – just with a friendly Big Tech interface.
The targets of Western algorithmic censorship are often framed as fringe or extremist – anti-vaxxers, Russian state media, conspiracy peddlers, hate speech mongers. Indeed, those groups have been hit. But the net has a tendency to widen. Post-Ukraine war, we’ve seen even staunch anti-war voices, human rights activists, and independent journalists critical of NATO or allied governments getting swept up as supposed purveyors of “Russian narratives.” Post-Iran conflicts, pro-Palestinian and antiwar Westerners have found their posts buried or accounts suspended, accused of spreading “terrorist propaganda” simply for highlighting civilian casualties or criticizing allies’ actions. There’s a clear pattern: when Western governments engage in military action or face a geopolitical showdown, domestic social media discourse is increasingly curated to support the official line. This may be done in the name of unity or combating enemy influence, but it erodes the very liberal values of pluralism and dissent that we claim to uphold.
If you want to know more about cognitive warfare, in addition to the video you saw above, I wrote a detailed report about it here:
The Illusion of Liberal Democracy?
All of this forces an uncomfortable question: Are we, in the liberal democratic West, living in a kind of curated informational bubble not terribly different from the societies we critique? The methods differ – we don’t (usually) have explicit state censors deleting articles or police arresting people for tweets (though even that has happened already under “hate speech” or “misinformation” laws in multiple countries). Instead, we have algorithmic enforcers and corporate proxies. It’s privatized and plausibly deniable. If you try to call it out, you may be labeled a conspiracy theorist or simply shouted down with “It’s a private company, it can do what it wants.”
The hypocrisy is worth underscoring. Western governments routinely accuse Russia, China, Iran, and others of manipulating information and suppressing truth. These accusations are often true. But at the same time, Western security agencies and regulators are perfecting their own subtle systems of perception management. A former NATO researcher might argue that the difference lies in intent: ours is to protect people from harmful lies, theirs is to spread them. Intentions do matter. However, the road to hell is paved with good intentions, as the saying goes. Once you normalize shadowy censorship for some noble cause, the temptation to use it for less noble causes grows.
We should be clear: Western shadow banning isn’t centrally directed by a single Ministry of Censorship. It’s an ecosystem. Governments nudge or pressure; tech companies internalize the state’s priorities (or share them culturally); algorithms get tweaked accordingly; and users adapt their behavior. The danger is that this ecosystem becomes a self-reinforcing cycle. Each time it proves effective – say, limiting the spread of “undesired” narratives during a crisis – those in power grow more confident in deploying it, and those out of power become more demoralized or marginalized.
In liberal democracy theory, the competition of ideas and the ability to criticize the government or prevailing orthodoxy is sacrosanct. That’s what differentiates us from authoritarian regimes. Yet if most citizens only ever see approved ideas trending and if dissent is algorithmically quarantined to the fringes, do we still have a meaningful competition of ideas? One could argue we are sliding into a kind of “soft totalitarianism,” where control is exercised not through open force but through the invisible architectures of tech that shape our cognition daily. It’s soft – you won’t be hauled off to a gulag for speaking your mind – but it’s total in that it can influence millions without them realizing.
Western establishments often defend these practices by saying, “We’re not banning speech, you can still say it – we’re just not going to amplify it.” This is often the refrain from platform executives: they prefer “counter speech” and de-ranking to outright removal. But when all major platforms adopt that stance in harmony with state pressure, the practical difference between de-amplification and banning starts to blur. Speech that can’t be heard might as well not exist. Freedom of speech without freedom of reach, as some say, is a hollow freedom.
Fighting the Invisible Muzzle: What Can Users Do?
While the system is complex and tilted against the average person, there are ways to detect and push back against shadow banning and algorithmic confinement. Here are a few tips for the savvy social media user:
Cross-check your visibility: If you suspect you’re shadow banned, don’t rely on your own feed. Log out or use a friend’s account to search for your posts, or have a trusted colleague see if your content appears in their feed/search. For example, if you hashtag a tweet or Instagram post, check in a private/incognito window whether it shows up under that hashtag. If it doesn’t, you might be shadow banned on that tag.
Monitor engagement patterns: A sudden, sustained drop in engagement (especially if your follower count is steady or growing) could indicate throttling. Compare notes with peers. If several users in a similar niche all see a dip at once, maybe the platform changed its algorithm or imposed a new rule. Creators often share such intel informally (a rough way to quantify a sustained drop is sketched after this list).
Use transparency features: Some platforms now offer a degree of insight. Instagram’s Account Status will tell you if your posts are blocked from recommendations washingtonpost.com – check it regularly. X (Twitter) has promised to label some tweets as “visibility limited” (though enforcement is spotty). YouTube’s analytics can sometimes hint at issues – e.g. if YouTube notifies you of a guideline warning, or if traffic from “Up next” recommendations vanishes, that suggests demotion.
Appeal and inquire: If you find out your content was limited (or even if you just strongly suspect it), make use of appeals processes. It’s often a black hole, but sometimes a successful appeal can restore reach. Also, speak up publicly (on other platforms or via blogs) about unfair suppression – occasionally companies will reverse a decision to avoid bad PR.
Diversify your platforms: The best insurance against any one platform’s shadow ban is to not put all your communication eggs in one basket. Use multiple social networks, plus your own website or newsletter. If you’re muzzled on Twitter, you might still reach people on Telegram or Mastodon. Decentralized platforms or those with chronological feeds (where you see all posts from followed accounts) can offer a censorship workaround – though they may not have the reach.
Opt out of the algorithm when possible: Some apps allow you to switch to a chronological or “following only” feed (Instagram and Facebook have this option; X retains a “Following” tab, though the algorithmic feed remains the default). Using that can sometimes bypass algorithmic filtering. Similarly, turning off personalized recommendations can reduce the degree of invisible curation (though it may limit what you see overall).
Leverage encryption and peer-to-peer sharing: In more extreme censorship scenarios, consider alternative dissemination – like sharing information in private encrypted groups (Signal, WhatsApp), or using peer-to-peer file sharing for important videos that might vanish from YouTube. A robust offline or off-platform network can’t be shadow banned.
Support transparency efforts: Push for laws or policies that require platforms to inform users when their reach is limited and why. The EU’s DSA is one attempt at this (requiring some transparency to users), though it comes with its own censorship baggage. Greater algorithmic transparency and user rights to appeal are crucial. As a user, staying informed and joining digital rights advocacy groups can amplify the demand for accountability.
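For the “monitor engagement patterns” tip above, even a crude statistical check beats going by gut feel. The sketch below assumes you have exported per-post impression counts from a creator dashboard (the numbers are hypothetical); the window size and the 50% threshold are arbitrary choices of mine, not values any platform documents.

```python
from statistics import mean

def flag_sustained_drop(impressions: list[int], window: int = 10, drop_ratio: float = 0.5) -> bool:
    """Return True if the last `window` posts average under `drop_ratio` of the prior baseline.

    One flop is noise; a sustained drop across many posts (with a steady follower count)
    is the pattern worth comparing notes on with peers in the same niche.
    """
    if len(impressions) < 2 * window:
        return False  # not enough history to say anything
    baseline = mean(impressions[-2 * window:-window])
    recent = mean(impressions[-window:])
    return recent < drop_ratio * baseline

# Hypothetical export: impressions per post, oldest first.
history = [900, 1100, 950, 1200, 1000, 980, 1050, 990, 1150, 1020,
           300, 280, 350, 310, 290, 320, 270, 330, 300, 310]
print(flag_sustained_drop(history))  # True: recent posts reach roughly 30% of the earlier baseline
```

A flag like this proves nothing on its own – algorithm changes and seasonal lulls produce the same signature – but it turns a vague suspicion into a number you can compare with peers.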
Ultimately, awareness is your first line of defense. If you understand that the game is rigged, you won’t be as easily fooled by the illusion of consensus or as likely to blame yourself when a post mysteriously flops. Even if you can’t fully break the shadow ban, knowing it exists is empowering – it allows you to mentally discount the skew and perhaps find creative ways around it.
Conclusion: Breaking the Spell
The notion that liberal democracies would employ “invisible strikes” against their own citizens’ speech – silent, deniable blows to free expression – feels like a betrayal of core values. And yet it’s happening, bit by bit, click by click. We’ve entered an era where your government might not openly silence you, but it doesn’t mind if Google’s or Meta’s algorithms do it quietly. In the U.S., this was supercharged by fears of foreign interference and internal unrest; in Europe, by war on the continent and restive populations. High-minded principles were, and are, being reinterpreted through a lens of security: what good is unfettered speech if it allows lies that destabilize society? The trouble is, once you accept that logic, those in power will always find something destabilizing about criticism or inconvenient truths.
For informed citizens and users, the task ahead is to demand transparency and accountability in how our digital public squares are governed. We should insist that content moderation be done in the sunlight – if a platform thinks a piece of content is problematic, let it be upfront and give the user a chance to respond or appeal. The shadowy approach must be rejected. As one tech writer argued, “Companies need to tell us exactly when and why they’re suppressing our megaphones” washingtonpost.com. Anything less is an affront to the idea of an open society.
We also need a societal gut-check: Are we comfortable with the Western state marrying Big Tech to engineer consent, even for causes we largely agree with today? What happens when those tools are turned to causes you don’t agree with, or wielded by a different political faction? The weaponization of algorithmic censorship sets a precedent that could outlive its initial targets. The only real safeguard is a public that is aware and watchful.
Shadow banning thrives on ignorance – the less you know about it, the better it works. By dragging these mechanisms into the light, we begin to break the spell. Remember that your digital environment is curated: question why you see what you see. Seek out voices beyond the algorithm’s comfort zone. And support those who are brave enough to call out the new propaganda system for what it is, even at risk of being cast into the algorithmic shadows themselves.
In the end, the health of our democracy may well depend on defeating the invisible censorship that has crept into our lives. We must rekindle the norm that truth and falsehood should fight it out in the open, not have one side quietly smothered in the crib. Only through transparency, pluralism, and a bit of righteous outrage can we hope to reclaim our digital public square – to make sure that the “shadows” in our feed are cast by the light of diverse opinions, not by the unseen hand of an algorithmic cage.
Sources:
Hunter, T. (2024). Everything we know about ‘shadowbans’ on social media. The Washington Post (Oct 16, 2024) – Explains how algorithms quietly suppress content and calls for transparency washingtonpost.com.
Gilbert, K. (2024). How Shadow Banning Can Silently Shift Opinion Online. Yale Insights (May 09, 2024) – Research showing that selective muting/amplifying of posts can shift users’ opinions while appearing neutral insights.som.yale.edu.
MR Online (2025). Switzerland…does not collaborate in the persecution of journalists. (July 02, 2025) – Notes EU Council Decision (CFSP) 2025/966 sanctioning German journalists for “propaganda,” likening it to Orwellian tactics mronline.org.
Consilium (EU) Press Release (2025). Russian hybrid threats: EU lists further individuals… (May 20, 2025) – Announces suspension of Russian media and broadened powers to target “information manipulation,” reflecting EU’s censorship measures in war context consilium.europa.eu.
Al Jazeera (2022). Twitter secretly boosted US psyops in Middle East. (Dec 21, 2022) – Investigation based on Twitter Files revealing Twitter “whitelisted” U.S. military propaganda accounts, amplifying pro-US narratives (anti-Iran, pro-drone strike) covertly aljazeera.com.
Barrett, A. (2024). Justices side with Biden over government’s influence on social media… SCOTUSblog (Jun 26, 2024) – Supreme Court opinion (Murthy v. Missouri) noting lack of standing but with dissent highlighting evidence of White House coercing Facebook to censor content scotusblog.com.
Fowler, G. (2022). Shadowbanning is real: How social media decides who to silence. The Washington Post (Dec 27, 2022) – In-depth report on platforms’ use of “silent reduction,” how Facebook’s algorithms score and demote “borderline” content, and calls for accountability washingtonpost.com.
Deppe, C., & Schaal, G. (2024). Cognitive warfare: a conceptual analysis… Frontiers in Big Data (Nov 2024) – Academic analysis of NATO’s cognitive warfare concept, defining it as targeting human cognition to influence decision-making frontiersin.org.
Al Jazeera Centre for Studies (2023). Digital Occupation…Battle for Narrative in Gaza. – Highlights disproportionate shadow banning and algorithmic censorship of pro-Palestinian content on social platforms, which suppresses Palestinian voices and skews narratives studies.aljazeera.net.
Bhuiyan, J. (2024). Meta to let US national security agencies and defense contractors use Llama AI. The Guardian (Nov 5, 2024) – Reports Meta’s exception for U.S. military to use its AI, indicating deepening cooperation between Big Tech and defense theguardian.com.
Intereconomics (2025). Big Tech and the US Digital-Military-Industrial Complex. (Vol. 60, 2025) – Examines the interdependency of Big Tech and the state; notes 13-fold rise in Big Tech’s military contracts 2008–2024 and how tech infrastructure is integral to modern warfare intereconomics.eu.
De Vynck, G. (2025). Google rushed to sell AI tools to Israel’s military after Hamas attack. The Washington Post (Jan 21, 2025) – Reveals Google’s internal push to provide AI (Vertex and Gemini) to IDF during Gaza war, despite prior employee protests over such military collaboration aurdip.org.
Lee Fang (@lhfang). (2022). Twitter Files Part 8: Pentagon’s covert online PsyOp campaign. – Cited via Al Jazeera report, details Twitter execs’ emails showing awareness and tacit approval of DoD’s network of fake accounts and Twitter’s efforts to not expose them aljazeera.com.
Center for Democracy & Technology (2022). Speech Moderation and Shadow Banning Survey. – (Referenced in WaPo) Found ~10% of U.S. social media users suspect they’ve been shadow banned, and marginalized communities (activists, educators) report experiences of reduced visibility washingtonpost.com.
Instagram Comms (2022). Account Status – Not Eligible for Recommendations. (Meta announcement) – Instagram’s feature to tell users if their posts won’t be recommended to others, an acknowledgment of de-amplification practices washingtonpost.com.
Motley Rice (2023). Why is Social Media Addictive? – Explains how intermittent reinforcement on social media (unpredictable likes/comments) mirrors slot machine mechanics, fueling addictive use motleyrice.com. Highlights dopamine and reward loops exploited by platform design motleyrice.com.
Atlantic Council (2025). G7 leaders…strengthen digital resilience. (June 6, 2025) – Discusses G7 agenda on digital policy and resilience against hybrid threats, implying coordinated approach to information security among democracies atlanticcouncil.org.
Haugen, F. (2021). Testimony and Disclosures (Facebook Files). – (Referenced) Exposed Facebook’s internal “dangerous content” scoring system and tendency to downrank content near policy-violating, e.g. demoting content predicted to be misinformation or socially risky washingtonpost.com.
Consilium (EU) (2024). Regulation (EU) 2022/2065 – Digital Services Act. – (Referenced via MR Online) EU law requiring very large platforms to be more transparent about recommendation systems and content moderation, theoretically giving users rights regarding algorithmic decisions mronline.org.
Gillespie, T. (2018). Custodians of the Internet – (Referenced in WaPo) Coined “silent reduction” to describe hidden moderation; notes the frustration of users sensing censorship but lacking proof washingtonpost.com.
(Inline citations in the text, shown as source domains, correspond to the entries listed above.)