See what your
moderation might
be missing
A Tox Scan analyzes your community data to reveal harmful behavior that may be slipping through your moderation.
Send us a sample of moderated chat or community data.
Aiba’s AI models analyze the conversations and return a short findings summary highlighting patterns, signals, and harmful interactions that may be passing unnoticed.
Moderation always
has blind spots
Even well-maintained moderation systems cannot catch everything.
Online conversations evolve quickly. Slang, context, and coded expressions allow harmful behavior to bypass rules and keyword filters.
As communities grow, these blind spots become harder to see through manual review alone.
Understanding what may be slipping through moderation requires looking at real conversation data at scale.
Run a Tox Scan on your moderation data
A Tox Scan is a quick analysis of your moderation data.
You upload a sample dataset of moderated chat messages, posts, or reports from your platform.
Aiba’s AI models analyze the conversations and produce a short findings summary highlighting harmful behavior that may be going undetected. The same models power the Amanda AI moderation platform.
The result is a practical snapshot of how conversations behave inside your community based on real data.
Only a sample dataset is required and the data is deleted after the analysis.
Signals revealed
through deeper analysis

Patterns that traditional moderation struggles to detect
When conversations are analyzed at scale, patterns begin to appear that are easy to miss during day-to-day moderation.
Misspellings, slang, coded language, and subtle escalation can hide harmful behavior inside otherwise ordinary conversations.
By analyzing the data more deeply, signals begin to emerge that traditional filters and manual review often overlook.
These patterns provide a clearer picture of how conversations behave across the platform.
How the Tox Scan works
Upload a sample dataset
Provide a sample of moderated chat messages, posts, or reports from your platform.
Only a sample dataset is required. Secure transfer options and NDA agreements are available.
AI analysis
Aiba’s AI models analyze the conversations for harmful behavior, language patterns, and interaction signals.
Findings summary
You receive a short findings summary highlighting patterns, signals, and harmful interactions that may be slipping through moderation.
The initial scan and report are provided at no cost.
What you receive

A clear snapshot of how moderation performs in practice
After the analysis you receive a short findings summary based on the dataset you provided.
The report typically highlights:
- Categories of harmful behavior detected
- Examples of conversations that bypassed moderation
- Patterns that appear across conversations
- Signals that may indicate repeat offenders
The goal is to provide a clearer picture of how conversations behave across the platform and where moderation coverage may have gaps.
Secure analysis
with a limited
dataset
Moderation data can be sensitive,
so the Tox Scan is designed to require only a small sample.
- Only a sample dataset is required
- Secure transfer options are available
- The data is analyzed in a controlled environment
- The dataset is deleted after the analysis
NDA agreements can be arranged if required.
Want deeper insights?
The X-Ray Report
A deep dive into your social data
For platforms that want a more detailed analysis of community behavior, the X-Ray Report offers a deeper review of moderation patterns, risk signals, and platform dynamics.
This analysis examines larger datasets and provides a more comprehensive view of how harmful behavior spreads through conversations.