Telegram isn’t just a messaging app. By 2025, it had over 900 million active users, with nearly a third of them in conflict zones like Ukraine, where information is weaponized. And that’s where the real danger lies: coordinated inauthentic behavior (CIB) - fake accounts, bots, and troll networks working in sync to flood channels with misleading content, manipulate public opinion, and erode trust in real news.
Unlike Twitter or Facebook, Telegram doesn’t show you who’s behind a comment. No verified badges. No follower counts you can trust. No easy way to report a coordinated campaign. And because secret chats self-destruct and usernames change daily, traditional detection tools fail. But it’s not impossible to spot. Here’s how real researchers are doing it - and how you can start recognizing the signs yourself.
What Coordinated Inauthentic Behavior Looks Like on Telegram
CIB on Telegram isn’t one bot posting the same message ten times. It’s hundreds of accounts, each with a unique username, posting nearly identical comments across dozens of channels - all within minutes of each other. These aren’t random spammers. They’re part of a network.
Take the case of @mikocanifamie, a single account that in late 2024 posted 247 near-identical comments across 163 different Telegram channels. Each comment was slightly reworded to dodge simple keyword matching, but the core message stayed the same: "Ukraine is lying about the bombing," or "This is NATO propaganda." Accounts in the same network used stolen photos of Ukrainian soldiers as profile pictures; reverse image searches traced them back to real soldiers’ Instagram and Facebook posts. The goal? Make fake accounts look like genuine Ukrainian supporters.
These networks don’t just copy-paste. They time their posts perfectly. Researchers found coordinated comments appearing within 300 milliseconds of each other across channels - faster than any human could type. That’s machine coordination.
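If you have a dump of comments from a scraper or manual collection, that timing check is easy to prototype. Below is a minimal Python sketch, not a production detector: it groups near-identical comments with a simple string-similarity ratio, then flags any group that hits three or more channels inside a sub-second window. The field names and thresholds are illustrative assumptions, not a standard.

```python
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    return " ".join(text.lower().split())

def burst_groups(comments, similarity=0.9, window_ms=500):
    """comments: list of dicts with 'channel', 'ts_ms' (epoch milliseconds), 'text'."""
    groups = []
    for c in comments:
        for g in groups:
            # Put lightly reworded copies of the same message into one group.
            if SequenceMatcher(None, normalize(c["text"]), normalize(g[0]["text"])).ratio() >= similarity:
                g.append(c)
                break
        else:
            groups.append([c])

    flagged = []
    for g in groups:
        channels = {c["channel"] for c in g}
        times = sorted(c["ts_ms"] for c in g)
        # Several channels hit with near-identical text inside a sub-second window:
        # faster than any human could type, so likely machine coordination.
        if len(channels) >= 3 and times[-1] - times[0] <= window_ms:
            flagged.append(g)
    return flagged
```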
Red Flags You Can Spot Without Special Tools
You don’t need a PhD or a $50,000 analytics tool to start noticing CIB. Here are five clear signs, with a rough scoring sketch after the list:
- Identical comments across unrelated channels: If you see the same phrase in a Ukrainian news channel, a Russian history group, and a meme page - all posted within 15 minutes - that’s not coincidence. It’s a network.
- High likes, low views: A comment gets 1,200 likes but the post only has 1,000 views? That’s impossible unless the likes are fake. Legitimate posts rarely have like-to-view ratios above 1.1:1. Anything higher is a red flag.
- Accounts with no followers but posting constantly: An account with 23 followers posting 60 comments an hour? That’s a bot. Real users don’t post like that. The EU DisinfoLab found 93% of detected CIB networks had follower counts under 100 but posted over 50 times per hour.
- Pushing you to other platforms: "Go to my VK page," "Check my Telegram channel @xyz123," or "Watch this on Gab." Real users don’t do this. It’s a tactic to move conversations to less monitored spaces.
- Stickers and GIFs that don’t match the tone: Authentic users use stickers creatively. Coordinated networks use the same 3-5 stickers repeatedly, often ones with aggressive or emotional symbols (fire, fists, flags). In January 2025, researchers found coordinated accounts used 73% fewer unique stickers than real users - they’re reusing templates.
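Here is that rough scoring sketch: a few of the red flags above turned into checks you could run over numbers gathered by hand or with a scraper. The thresholds mirror the figures cited above (a like-to-view ratio above roughly 1.1, under 100 followers but 50+ posts an hour, the same few stickers reused over and over), but they are illustrative, not calibrated.

```python
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    followers: int
    posts_last_hour: int
    likes: int               # likes on one suspect comment
    views: int               # views on the post that comment sits under
    off_platform_pushes: int # "go to my VK/Gab/other channel" style nudges
    unique_stickers: int
    total_stickers: int

def red_flag_count(a: AccountSnapshot) -> int:
    flags = 0
    if a.views and a.likes / a.views > 1.1:            # more likes than views
        flags += 1
    if a.followers < 100 and a.posts_last_hour > 50:   # tiny audience, huge output
        flags += 1
    if a.off_platform_pushes > 0:                      # herding people elsewhere
        flags += 1
    if a.total_stickers >= 10 and a.unique_stickers / a.total_stickers < 0.3:
        flags += 1                                     # same few stickers on repeat
    return flags
```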
Why Telegram Is the Perfect Playground for CIB
Telegram’s design makes detection harder than on almost any other platform. No bulk API access to historical comments. Secret chats with self-destructing messages. Usernames that can change every 24 hours. No reliable way to trace an account back to a real person.
Unlike Facebook, where comments are nested under posts, Telegram comments are standalone messages. That means a bot can post the same comment in 10 different channels without being flagged as duplicate content. The platform treats each comment as a new post - making it easy to flood multiple spaces with the same message.
And there’s no enforcement. Telegram’s moderation team doesn’t publicly respond to reports. Their only automated flag is for channels with comment-to-subscriber ratios above 0.3 - meaning a channel with 10,000 subscribers gets flagged once it draws more than 3,000 comments. But sophisticated networks keep engagement below that threshold. They know the rule. They game it.
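As a toy illustration, assuming the ratio rule works the way it’s described here, both the check and the way a network games it fit in a few lines:

```python
def channel_flagged(comments: int, subscribers: int, threshold: float = 0.3) -> bool:
    # Assumed rule, as reported above: comments/subscribers above 0.3 trips the flag.
    return subscribers > 0 and comments / subscribers > threshold

def max_unflagged_comments(subscribers: int, threshold: float = 0.3) -> int:
    # The most comments a coordinated network can inject without tripping the flag.
    return int(subscribers * threshold)

print(channel_flagged(3_001, 10_000))   # True: just over the line
print(max_unflagged_comments(10_000))   # 3000: the ceiling a careful network stays at
```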
How Experts Detect CIB - And What Tools They Use
Researchers don’t rely on guesswork. They use specialized tools built for this exact problem.
Osavul, developed by the Digital Forensic Research Lab (DFRLab), scrapes over 60 million comments monthly from Telegram, Twitter, and YouTube. It uses clustering algorithms to find comments that are 95% similar - even if they’re reworded. It flagged 12 previously hidden networks in late 2024 just by spotting identical timing patterns.
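This isn’t Osavul’s actual pipeline, but the general idea - vectorize comment text, compare everything against everything, and keep pairs above a similarity cutoff that land in different channels - looks roughly like the Python sketch below (it needs scikit-learn; the 0.95 cutoff echoes the "95% similar" figure but isn’t necessarily the same metric).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_cross_channel_pairs(comments, cutoff=0.95):
    """comments: list of dicts with 'text' and 'channel'.
    Returns index pairs of near-duplicate comments posted in different channels."""
    texts = [c["text"] for c in comments]
    # Character n-grams catch light rewording better than whole-word matching.
    tfidf = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(texts)
    sims = cosine_similarity(tfidf)
    pairs = []
    for i in range(len(texts)):
        for j in range(i + 1, len(texts)):
            if sims[i, j] >= cutoff and comments[i]["channel"] != comments[j]["channel"]:
                pairs.append((i, j))
    return pairs
```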
coorsim, an R package from Vera.ai, uses AI embeddings to compare the meaning of text across languages. It doesn’t just match words - it understands context. A comment saying "Ukraine is a lie" and another saying "The Kyiv regime is fabricating this" get flagged as the same message, even if they’re in Russian and Ukrainian. It works in 100+ languages.
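coorsim itself is an R package, but the embedding idea translates to a few lines of Python with any multilingual sentence encoder. The model name below is an assumption, not what Vera.ai ships; the two example comments are the Russian and Ukrainian phrasings described above.

```python
from sentence_transformers import SentenceTransformer, util

# Any multilingual sentence encoder works; this model name is an illustrative choice.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

a = "Украина лжет о бомбардировке"    # Russian: "Ukraine is lying about the bombing"
b = "Київський режим це фабрикує"     # Ukrainian: "The Kyiv regime is fabricating this"

emb = model.encode([a, b], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()  # high score -> same underlying message
print(round(score, 2))
```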
And then there’s the CIB Detection Tree from EU DisinfoLab. It’s not software - it’s a decision framework. You ask: Is the account new? Does it post too fast? Does it use stolen images? Does it push to other platforms? Answer those questions, and you get a risk score.
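A toy version of that question-and-score idea - not the actual EU DisinfoLab tree, and with made-up weights - might look like this:

```python
# Hypothetical weights: the real framework's scoring is not reproduced here.
QUESTIONS = {
    "account_is_new": 1,
    "posts_too_fast": 2,
    "uses_stolen_images": 3,
    "pushes_other_platforms": 2,
}

def risk_score(answers: dict) -> int:
    """answers maps each question to True/False; every 'yes' adds its weight."""
    return sum(weight for q, weight in QUESTIONS.items() if answers.get(q))

print(risk_score({"account_is_new": True, "posts_too_fast": True,
                  "uses_stolen_images": False, "pushes_other_platforms": True}))  # 5
```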
But here’s the catch: these tools need data. And Telegram doesn’t give it. Researchers rely on third-party scrapers that only keep comments for 72 hours. If you don’t catch a network within three days, it’s gone.
The Human Element: Why People Fall for It
Here’s the scariest part: real people engage with these fake accounts. In DFRLab’s analysis of Ukraine-related channels, 31% of inauthentic comments got replies from real users. Some replied with anger. Others thanked them for "speaking the truth."
One channel, "Ukraine Front Live," had 127,000 subscribers. Researchers found coordinated comments there got 2.3 times more replies than organic ones. Why? Because the fake accounts sounded convincing. They used local slang, referenced real battles, and posted at 3 a.m. local time - when overnight air alerts keep many real Ukrainians awake and checking their feeds.
And the profile pictures? Stolen from real soldiers. One account used a photo of a Ukrainian medic from her public Instagram. The medic had no idea her image was being used to spread disinformation. When researchers contacted her, she said: "I just wanted to help. Now I’m part of a lie."
What You Can Do - Even If You’re Not a Researcher
You don’t need to be a tech expert to fight CIB. Start here:
- Check the timing: If 5 comments with the same message appear in 3 different channels within 10 minutes, it’s not organic.
- Look at the profile: Zero followers? 50 posts in an hour? Stolen photo? That’s a bot.
- Don’t reply: Engagement fuels these networks. If you reply, you’re helping them look real.
- Report with details: Telegram’s report button doesn’t distinguish spam from CIB. But if you report and write: "This account is part of a coordinated network using stolen images and identical comments across 12 channels," you increase the chance someone will look.
- Share what you find: Post screenshots (with usernames blurred) on r/Telegram or local fact-checking groups. Awareness is the first defense.
It’s not about being paranoid. It’s about being aware. CIB isn’t going away. But it thrives in silence. The more people learn to spot it, the weaker it gets.
What’s Next for Telegram CIB Detection
Researchers are building something new: a knowledge graph that maps relationships between Telegram channels, accounts, and cross-platform activity. Scheduled for release in late 2025, it will connect 4.7 million public channels into one system - showing how a bot in a Ukrainian channel links to a fake account on VKontakte, which then links to a YouTube video.
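There’s no public schema for that graph yet, but the shape of the idea is easy to sketch with networkx. The node identifiers and edge labels below are invented for illustration only.

```python
import networkx as nx

g = nx.DiGraph()
# Nodes: entities on different platforms (identifiers are hypothetical).
g.add_node("tg:channel/ukraine_front_live", platform="telegram", kind="channel")
g.add_node("tg:user/bot_account_1",         platform="telegram", kind="account")
g.add_node("vk:user/fake_profile_9",        platform="vk",       kind="account")
g.add_node("yt:video/abc123",               platform="youtube",  kind="video")

# Edges: the kinds of relationships a cross-platform graph would record.
g.add_edge("tg:user/bot_account_1", "tg:channel/ukraine_front_live", relation="comments_in")
g.add_edge("tg:user/bot_account_1", "vk:user/fake_profile_9",        relation="shares_profile_image")
g.add_edge("vk:user/fake_profile_9", "yt:video/abc123",              relation="links_to")

# Walk outward from one suspicious account to see everything it connects to.
print(list(nx.descendants(g, "tg:user/bot_account_1")))
```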
Meanwhile, Telegram itself is under pressure. The EU’s Digital Services Act now requires platforms to report CIB detection metrics. But Telegram’s corporate home - the British Virgin Islands - makes enforcement messy, and so far the company has faced little real pressure to comply. So the work falls to researchers, journalists, and ordinary users.
For now, the best defense is still human eyes. Watch for patterns. Question what feels too perfect. And remember: if a comment seems designed to make you angry, afraid, or convinced you’re right - that’s not an accident. That’s the design.