
Profanity Filter

Detect whether a given string contains profanity/vulgar language.

Flag profanity and vulgar language with this profanity detection tool. Classification is performed at the per-word level. The demo supports English.

Tags: profanity filter, content moderation
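As an illustrative sketch only (not Sapling's implementation or API), per-word profanity classification can be thought of as tokenizing the input, scoring each word, and returning a flag plus character offsets for any word judged profane. The word list and function below are placeholders; a production filter would use a much larger lexicon and/or a trained model that accounts for context.

```python
import re

# Placeholder lexicon standing in for a trained classifier (illustrative only).
PROFANE_WORDS = {"darn", "heck"}


def flag_profanity(text: str) -> list[dict]:
    """Return one record per word, marking whether the word was flagged."""
    results = []
    for match in re.finditer(r"[A-Za-z']+", text):
        word = match.group(0)
        results.append({
            "word": word,
            "start": match.start(),   # character offset where the word begins
            "end": match.end(),       # character offset where the word ends
            "profane": word.lower() in PROFANE_WORDS,
        })
    return results


if __name__ == "__main__":
    for record in flag_profanity("Well, darn it!"):
        print(record)
```

Returning offsets alongside the per-word flags is what makes it possible to highlight flagged words in place, as a moderation UI would.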





Looking for other ways to measure offensive or toxic content? Contact us.



Toxicity and Bias

Toxicity is an issue with both human and machine-generated text. Within the general category of toxicity are issues such as profanity, threats, and insults. Bias is another issue that—though often unintentional—can cause harm.

Whether a piece of text is toxic and/or biased often depends on subtle, contextual cues. Sometimes models must be fine-tuned for a particular setting. Contact us to discuss how we can best help with your business use case.