Neurodivergent and trauma-sensitive individuals, such as those with autism spectrum disorder, are highly vulnerable to sensory and emotional distress caused by exposure to toxic online content, including hate speech, profanities, and their obfuscated variants. Conventional content filters lack robust multilingual support, fail to adapt to novel zero-day patterns, and provide no guarantees of reliable long-term performance; moreover, they rarely account for neurocognitive predictability. Filtering systems that anticipate potential emotional triggers, such as toxic content, may reduce anxiety and support sustained engagement.
In this study, we propose a fully autonomous, real-time web content filtering system designed to protect neurodivergent and trauma-sensitive users. Operating as a zero-configuration transparent proxy, the system performs continuous multilingual analysis using a self-evolving knowledge graph that adaptively identifies both explicit and emerging toxic expressions, while its core adaptive learning algorithm is designed to ensure stable and reliable operation over time. The system maintains a personalized Safe-Point toxicity threshold for each user; when this threshold is exceeded, harmful content is instantly masked while readability is preserved. The primary contribution of this study is a practical, self-learning filtering system that offers vulnerable users a robust, adaptive, and privacy-conscious defense against online verbal aggression.
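To make the Safe-Point masking step concrete, the following minimal sketch shows threshold-based masking for a single user. It is an illustration only: the `UserProfile` class, the `score_token` lexicon lookup, and the example threshold values are assumptions standing in for the knowledge-graph-based scoring described above, which is not reproduced here.

```python
# Minimal sketch of per-user Safe-Point masking (illustrative assumptions only).
from dataclasses import dataclass


@dataclass
class UserProfile:
    user_id: str
    safe_point: float  # personalized toxicity threshold in [0, 1] (assumed range)


def score_token(token: str) -> float:
    """Hypothetical toxicity score; a stand-in for the knowledge-graph lookup."""
    lexicon = {"idiot": 0.8, "stupid": 0.7}  # assumed example entries
    return lexicon.get(token.lower().strip(".,!?"), 0.0)


def mask_token(token: str) -> str:
    """Mask a token while keeping its first character and length for readability."""
    return token[0] + "*" * (len(token) - 1) if len(token) > 1 else "*"


def filter_text(text: str, profile: UserProfile) -> str:
    """Mask any token whose toxicity score exceeds the user's Safe-Point."""
    return " ".join(
        mask_token(tok) if score_token(tok) > profile.safe_point else tok
        for tok in text.split()
    )


if __name__ == "__main__":
    user = UserProfile(user_id="u1", safe_point=0.5)
    print(filter_text("You are such an idiot sometimes", user))
    # -> "You are such an i**** sometimes"
```

The masking keeps the first character and the original token length so the surrounding sentence remains legible, mirroring the readability-preserving behavior described above; the actual system would obtain scores from its multilingual knowledge graph rather than a fixed lexicon.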
