What Amanda Detects
Amanda covers nine violation categories out of the box. Every category works across text, voice, and image content, and can be tuned to fit your platform, your audience, and your community context. Some things are harmful everywhere. Others depend on who is in the room.
All categories. All content types.
Every detection category works across text, voice, and image moderation. A grooming pattern identified in a voice conversation is treated with the same urgency as one found in chat.
Tunable to your platform
Thresholds, sensitivity, and enforcement actions for every category can be configured to match your platform’s policies and audience. A children’s platform has different needs than a competitive gaming community. Amanda adjusts.
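As a rough illustration of what per-category tuning means in practice, the sketch below models two platforms with different policies. All names here are hypothetical and illustrative only; they are not Amanda's actual configuration API.

```python
from dataclasses import dataclass

@dataclass
class CategoryPolicy:
    """Hypothetical per-category settings (illustrative, not Amanda's API)."""
    sensitivity: float  # 0.0 (most lenient) to 1.0 (most strict)
    action: str         # enforcement action, e.g. "flag", "mute", "escalate"

# A children's platform might run stricter defaults across the board.
childrens_platform = {
    "grooming": CategoryPolicy(sensitivity=0.95, action="escalate"),
    "profanity": CategoryPolicy(sensitivity=0.90, action="mute"),
}

# A competitive gaming community might tolerate rougher language,
# while keeping the most serious categories just as strict.
gaming_community = {
    "grooming": CategoryPolicy(sensitivity=0.95, action="escalate"),
    "profanity": CategoryPolicy(sensitivity=0.40, action="flag"),
}
```

The point is that severity is contextual: both configurations treat grooming identically, but profanity thresholds and actions diverge with the audience.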
Not sure what you need?
Run a free Tox Scan and we will show you what your current setup is missing.
No integration required, no cost.