The digital landscape of X (formerly Twitter) is currently experiencing a seismic shift, as an aggressive crackdown on automated accounts has inadvertently swept up a significant number of legitimate human users. This large-scale purge, aimed at eliminating bots and spam, has resulted in the suspension and deletion of numerous “alt” accounts, many of which were used by individuals to privately curate and consume niche adult content, often referred to as “secret porn feeds.” The unforeseen consequences of this automated enforcement have left a trail of outrage and frustration among users who have lost years of meticulously compiled digital archives.
Justin Diego’s Unexpected Plunge into Anonymity
Justin Diego, a celebrity news influencer with a substantial combined following of 617,000 across YouTube and Instagram, is no stranger to the public eye. Yet, in 2024, he sought a different kind of digital presence on X. Recognizing the need for privacy away from his main, high-profile accounts, Diego created a secret X account. His intention was simple: to discreetly follow and keep tabs on his favorite OnlyFans creators. This burner account served as a private sanctuary, a space where he could bookmark and like solo content and masturbation videos without the scrutiny or association with his public persona. He never posted from this account, preferring the quiet anonymity it afforded him.
However, Diego’s quest for private digital curation came to an abrupt halt over a recent weekend. Upon logging into X, he was met with the notification that his secret account had been suspended. This was not an isolated incident but a ripple effect of a broader, more aggressive enforcement strategy being implemented by X. Diego, like countless others, found himself caught in the crossfire of a platform-wide war on bots, a battle whose automated weapons proved to be indiscriminate, hitting human targets alongside their intended robotic adversaries.
X’s Escalated War on Bots: A Double-Edged Sword
Beginning this month, X has significantly ramped up its efforts to combat automated accounts. Nikita Bier, X’s head of product, publicly boasted about the platform’s rapid pace of flagging and suspending bots, claiming a rate of “208 bots per minute and growing” as of April 9. This campaign is a direct response to longstanding issues with fake, inactive, and spam accounts that have plagued the platform, undermining its integrity and user experience. The stated goal is to cleanse X of these digital nuisances, creating a more authentic and engaging environment for its users.
The company’s policy explicitly targets “inauthentic activity that undermines the integrity of X.” While this policy is designed to catch malicious bots attempting to manipulate engagement or spread misinformation, its broad interpretation by automated systems has proven problematic. Private accounts, often characterized by their low public activity (like posting) but high consumption (liking, bookmarking), are seemingly being misidentified. These “lurking” accounts, used by humans for personal curation rather than public interaction, are being flagged as if they were spam bots trying to artificially inflate engagement metrics. The irony is stark: accounts designed for genuine, if private, human interest are being deemed “inauthentic” by the very systems meant to identify artificiality.
The Catastrophic Loss for “Alts” and Their Keepers
Exactly how many actual bots have been purged from X since early April remains unclear, as the company has not responded to requests for comment. What is painfully clear, however, is the catastrophic impact the purge has had on human users. Thousands, if not millions, of individuals who maintained “alt” accounts – secret, secondary profiles – have seen years of their meticulously curated digital libraries vanish overnight. These alts were often dedicated to watching, bookmarking, and archiving their favorite adult content, creating personal, private feeds of media that resonated with them. The author of the original article even reported that their own alt account, created in 2021 during the pandemic, was “nuked” over the weekend, highlighting the widespread nature of the issue.
The emotional toll of this loss is palpable across the platform. Tom Zohar, an actor based in San Diego, lamented the destruction of his digital collection, stating, “Not a single rule was violated mind you, years of curation and accumulation gone in a flash for no reason. The burning of the library of Alexandria’s got nothing on this tragedy.” This sentiment underscores the profound sense of loss experienced by users who viewed their alt accounts not just as mere profiles, but as personal archives, digital extensions of their interests and identities. Another user, @saintgoth, expressed disbelief: “6 yr old goon acc is suspended this cannot be real.” These reactions paint a vivid picture of a user base feeling unjustly targeted and stripped of their digital heritage.
Justin Diego articulates the core of the issue perfectly: “Sometimes people just need a page that’s specifically for them to engage with content they don’t want other people to know they’re into. That doesn’t make you a bot; that makes you human, actually.” This statement highlights the fundamental misunderstanding inherent in an overly aggressive, automated moderation system that fails to grasp the nuances of human behavior and the legitimate desire for privacy in digital spaces.
A Pattern of Automated Enforcement and User Blowback
This latest purge, while seemingly abrupt, is part of an ongoing and evolving initiative by X to control spam and inauthentic activity. In October, Nikita Bier’s team announced the removal of 1.7 million bots, primarily targeting reply spam, with a future focus on direct message spam. Leading up to April, Bier revealed that “nearly half of the product team” had pivoted to enhancing X’s “spam mitigation features,” prioritizing advanced bot detection systems and automated enforcement mechanisms. The increasing reliance on artificial intelligence and machine learning to police the platform is evident.
However, this shift towards automated moderation has consistently drawn significant criticism and blowback from users. One user, in a Change.org petition demanding account reinstatement, articulated this frustration: “Your overreliance on AI systems is not nearly as successful as you’d like to think it is. I mean let’s get serious here, these AI systems & LLMs can’t even distinguish between a real human account that’s been paying their premium subscription for 2 years & has as a credit card on file and is ID verified versus A BOT FROM NIGERIA OR SINGAPORE.” This powerful critique highlights the perceived failure of AI to differentiate between genuine, albeit unconventional, human activity and malicious bot behavior, even when users have actively verified their humanity through premium subscriptions.
The widespread mourning for lost accounts is further evidenced by posts like “A moment of silence for all the gooner accounts we’ve lost,” from a user known as buttmutt. These reactions collectively underscore the growing difficulty of striking a balance between aggressive platform enforcement and ensuring accurate, fair action that doesn’t penalize legitimate users.
X’s Troubled Moderation History Under Elon Musk
The platform’s struggles with moderation are not new; they have been a recurring theme since Elon Musk acquired X. Musk famously vowed to “defeat the spam bots or die trying!”—a promise that, despite intense efforts, has been met with mixed results and considerable controversy. Under his ownership, X has seen a documented surge in hate speech, harassment, and misinformation, leading many to question the effectiveness and consistency of its moderation policies.
More recently, X’s AI chatbot, Grok, faced widespread scrutiny after users discovered and exploited its image-editing feature to generate sexualized, nonconsensual deepfakes of women and minors. This incident raised severe safety and legal concerns, highlighting significant vulnerabilities in X’s AI and content moderation safeguards. The deepfake controversy further illustrated a platform grappling with the complexities of managing content, especially in sensitive areas, and the inherent risks of relying heavily on automated systems without robust human oversight. This history of moderation challenges provides crucial context, demonstrating that the current bot purge is not an isolated misstep but rather another chapter in X’s ongoing, often turbulent, journey to govern its digital ecosystem.
Collateral Damage to Queer and Trans Communities
Beyond the individual losses, the bot purge carries significant implications for specific communities. X has, for all its flaws, served as a crucial digital reservoir for consensual sexual media and queer education. For many queer and trans individuals, social media platforms like X offer vital spaces for finding information, exploring identities, and forming communities—especially in offline environments where such spaces may be scarce or unsafe.
Alexander Monea, an associate professor at George Mason University and author of The Digital Closet: How the Internet Became Straight, emphasizes this point: “When social media platforms purge sexual content, queer and trans creators are always collateral damage. The very communities that are most dependent on digital platforms for finding information, exploring their identities, and forming communities due to the lack of safe offline environments for doing so are the same ones most susceptible to being swept up in blunt-force enforcement measures.” This highlights a systemic issue: broad, automated purges, particularly those targeting “sexual content,” disproportionately affect marginalized groups who rely on these platforms for connection and expression. The algorithms, lacking nuance, often fail to distinguish between harmful content and consensual, identity-affirming material.
The Future of Trust and Curation on X
While some suspended accounts have reportedly been reinstated, many more users continue to express intense outrage and frustration over being unexpectedly locked out of accounts they had nurtured for years. For many, the appeals process has proven unsuccessful, leaving them feeling unheard and helpless. Justin Diego, despite having a premium subscription, which he believed would verify his humanity, is still fighting his suspension, with appeals thus far yielding no results. He rightly questions the value of paying for verification if it doesn’t protect against automated misidentification.
The current situation on X raises critical questions about the future of user trust, digital archiving, and the role of automated moderation on social media platforms. If users cannot rely on platforms to protect their legitimate, private digital spaces, where will they go? The loss of years of curated content is not just an inconvenience; it represents a significant blow to personal digital histories and community resources. As platforms increasingly lean on AI to manage vast amounts of data and activity, the challenge of designing systems that can effectively combat malicious actors without inadvertently harming legitimate human users becomes paramount. The “Big Bot Purge” on X serves as a stark reminder that in the quest for a cleaner digital environment, the human cost of blunt-force automation can be substantial.
Conclusion
The recent “Big Bot Purge” on X, while ostensibly aimed at improving platform integrity, has triggered a wave of unintended consequences, particularly for users maintaining “alt” accounts for private adult content consumption. The stories of individuals like Justin Diego, along with countless others, underscore a critical flaw in X’s increasingly automated moderation strategy: the inability of algorithms to accurately distinguish between malicious bots and nuanced human behavior. This indiscriminate approach has led to the catastrophic loss of years of curated content, eroding user trust and disproportionately impacting vulnerable communities, such as queer and trans individuals, who rely on these platforms for safe spaces and identity exploration. As X continues its efforts to combat inauthentic activity, it faces the formidable challenge of refining its enforcement mechanisms so that legitimate human users do not become collateral damage in the ongoing war against bots. Only then can it preserve the rich, diverse, and often private tapestry of human expression that defines the platform.
