By Abel W. Sugebo
On March 14, 2026, Adonay Berhane, Ethiopia’s most-followed TikTok creator with 5.4 million followers, posted what he described as potentially his “last video” on the platform. His offense? Sitting down and “speaking calmly” to his audience about life, growth, and healing.
Adonay’s account had been hit with an unprecedented wave of community-guideline violation notices, resulting in mass content removals. In a direct appeal to TikTok leadership, he described what many Ethiopian creators now recognize as a systemic problem: coordinated mass-reporting campaigns designed to trigger automated content moderation systems and silence legitimate voices.
Adonay is not alone. Across Ethiopia’s TikTok community, creators with audiences ranging from thousands to millions report sudden account suspensions, shadowbans, and content removals that they attribute to mass-reporting attacks. The pattern is consistent: accounts with no history of violations suddenly face cascading enforcement actions, often coinciding with competitive rivalries or content that challenges certain narratives.
The Alleged Underground Market
Perhaps most disturbing are the persistent claims of an alleged “Reporting-as-a-Service” market, an underground economy where individuals purport to offer account takedowns, mass comments, and coordinated reporting for hire. Multiple creators openly discuss the belief that people can purchase everything: reports, comments, engagement, and even account suspensions.
These allegations require verification, but their consistency suggests a phenomenon worthy of urgent investigation. If actors have indeed turned platform moderation into a commodity, the implications are staggering. Systems intended to protect users have become weapons for commercial and potentially political purposes.
In a country experiencing ongoing armed conflict, deep political fragmentation, and severe economic instability, the weaponization of digital platforms represents more than a content moderation failure. Coordinated manipulation threatens the fragile information ecosystem that millions of Ethiopians depend on for news, expression, and economic survival.
The AI Moderation Vulnerability
The core problem lies in how platforms scale content moderation. With billions of posts daily, platforms rely heavily on artificial intelligence to detect violations and automate enforcement. But AI content moderation systems have well-documented limitations that are particularly acute in the Ethiopian context.
First, language barriers create fundamental challenges. Most AI models work best with high-resource languages like English and Mandarin, where developers have access to vast amounts of training data. Amharic, Tigrigna, and Afaan Oromo, Ethiopia’s primary languages, are low-resource languages with comparatively little training data available, and accuracy in detecting context, tone, and intent suffers dramatically as a result.
Cultural context compounds these technical limitations. AI struggles with context-dependent content, where a word or phrase that’s benign in one cultural setting may be inflammatory in another. Without local expertise embedded in moderation workflows, platforms cannot distinguish between legitimate expression and genuine violations. A reference that carries deep political meaning in Ethiopian discourse might appear innocuous to an algorithm trained on Western datasets.
These vulnerabilities make AI systems particularly susceptible to coordinated manipulation. Mass reporting campaigns exploit these weaknesses by mimicking the signals platforms use to identify violations: high report volume, rapid escalation, pattern consistency. An AI system will interpret a flood of coordinated reports as evidence of a genuine violation, triggering automated enforcement before any human reviewer examines the actual content.
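To make the vulnerability concrete, consider a minimal sketch in Python of the kind of naive volume-based rule that coordinated reporting exploits. The thresholds and function names here are illustrative assumptions, not any platform’s documented logic:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds for illustration only; platforms do not
# publish their enforcement rules, and these numbers are assumptions.
REPORT_THRESHOLD = 500
WINDOW = timedelta(hours=24)

def should_auto_remove(report_times: list[datetime], now: datetime) -> bool:
    """Naive volume-only rule: remove content once recent reports cross
    a fixed threshold, with no check for coordination and no human
    review. This is exactly the signal a mass-reporting campaign can
    manufacture at will."""
    recent = [t for t in report_times if now - t <= WINDOW]
    return len(recent) >= REPORT_THRESHOLD
```

Because a rule like this considers only volume, whoever can buy coordinated reports controls the only input that matters.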
Ethiopian creators face a moderation system that doesn’t understand their languages, can’t parse their cultural context, and allows coordinated actors who understand these blind spots to game the enforcement mechanisms.

Adonay (center) celebrates winning Best TikToker of the Year at the 2025 TikTok Creative Awards Ethiopia, organized by G-Power Creative Awards. Photo: G-Power Creative Awards
Information Warfare in the Creator Economy
As a disinformation researcher, I recognize mass reporting campaigns as a form of information warfare, the systematic manipulation of information flows to achieve strategic objectives. In Ethiopia, where press freedom ranks 145th out of 180 countries globally, and traditional media faces severe state control, social media platforms function as a de facto alternative public sphere. For millions of Ethiopians, particularly youth, platforms like TikTok have become essential economic infrastructure.
When competitors can destroy each other’s accounts through coordinated reporting rather than competing on content quality, the entire market becomes distorted. Success depends not on creativity or audience connection, but on who can most effectively manipulate platform enforcement systems. Economic warfare masquerades as community moderation.
The political implications are equally troubling. In a country where state surveillance is pervasive and arrests of digital creators are routine, mass reporting provides a mechanism for proxy censorship. State-aligned actors, political opponents, or partisan networks can silence dissenting voices without direct government intervention, creating plausible deniability while achieving the same outcome. The platform becomes the enforcer, and the silencing appears to result from community standards rather than political suppression.
Perhaps most insidious are the chilling effects. When creators watch accounts with millions of followers get suspended overnight for benign content, the message is clear: visibility is vulnerability. The threat of mass reporting campaigns doesn’t just punish specific creators. It deters others from addressing controversial but legitimate topics. The boundaries of acceptable discourse shrink not through explicit censorship, but through the internalized fear of platform manipulation.
If the alleged “reporting-as-a-service” market exists, these dynamics intensify dramatically. In a fragile environment like Ethiopia, where researchers have documented information manipulation as a tool of ethnic violence and political mobilization, the ability to purchase account takedowns represents a profound threat to information integrity and public safety.
What Must Change
The solutions are not mysterious. Platforms already possess the technical capacity to implement more robust safeguards. Behavioral analysis could distinguish between organic community reports and coordinated campaigns. When 1,000 reports arrive within an hour against an account with no prior violations, the pattern itself signals manipulation rather than genuine harm.
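A behavioral safeguard of the kind described above might, as a first approximation, look like the following sketch. The 1,000-reports-in-an-hour figure and the clean-history check come directly from the example in this paragraph; the function itself is an illustrative assumption, not any platform’s actual safeguard:

```python
from datetime import datetime, timedelta

# Thresholds taken from the example above: 1,000 reports within one
# hour against an account with no prior violations.
BURST_WINDOW = timedelta(hours=1)
BURST_THRESHOLD = 1000

def escalate_to_human(report_times: list[datetime],
                      prior_violations: int,
                      now: datetime) -> bool:
    """Route a report flood against a clean account to human review
    rather than automated enforcement: a sudden burst of reports
    against an account with no violation history is itself a signal
    of manipulation rather than genuine harm."""
    burst = sum(1 for t in report_times if now - t <= BURST_WINDOW)
    return burst >= BURST_THRESHOLD and prior_violations == 0
```

A production system would also weight reporter reputation, account-age clustering among reporters, and similarity of report text; this sketch captures only the volume-and-history pattern named above.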
For high-influence accounts, manual review before enforcement should be standard practice. Platforms already stratify their approach according to advertiser relationships and political sensitivity. Extending similar protections to creators with significant audiences would simply acknowledge their role as de facto public figures and economic actors.
Local expertise remains the most glaring gap. Platforms cannot moderate Ethiopian content without Ethiopian moderators, advisors, and civil society partners who understand linguistic nuance and cultural context. Hiring content moderators fluent in local languages, establishing advisory councils with Ethiopian digital rights organizations, and partnering with local researchers who can provide real-time context all represent necessary steps.
Transparency would create accountability where none currently exists. Platforms should publish data on report volumes by country and language, automated-versus-human review rates, and appeal success rates. For Ethiopia specifically, basic questions remain unanswered: How many accounts have been suspended in recent years? What percentage of those decisions received human review? Without access to such data, independent monitoring becomes impossible.
The weaponization of mass reporting represents a fundamental challenge to how we govern digital public spheres in politically complex environments. When platforms allow their moderation systems to become tools of censorship and economic warfare, they undermine the very communities they claim to serve. When AI moderation operates without local expertise, it becomes a vector for manipulation rather than a safeguard against harm.
Ethiopia’s digital creators deserve platforms that understand their context, protect their livelihoods, and resist manipulation by bad actors. The technology exists. The expertise exists. What’s missing is the institutional will to deploy both to advance fairness and transparency in one of the world’s most challenging media environments. The current trajectory is unsustainable. The stakes are too high to accept systems that actors can purchase, manipulate, and deploy as instruments of silencing.
___
About the Author
Abel W. Sugebo is Executive Director and Founder of Inform Africa/HaqCheck, a research organization focused on disinformation, digital rights, and information integrity in Sub-Saharan Africa. With over 12 years of experience in OSINT, influence operations analysis, and investigative journalism, Abel has partnered with major social media platforms on content verification and misinformation detection programs. His work focuses on AI governance, trust and safety, and platform accountability in complex political environments. He has authored multiple reports on digital rights and disinformation in the Horn of Africa. He is a 2015 CPJ Press Freedom Award recipient and co-founder of Zone Nine, Ethiopia’s pioneering digital rights blogging collective. He currently works from Virginia, United States.
