
Honestra: Firewall for Teleological Narratives
Clean explanations from dirty teleology.
Honestra watches AI explanations and system messages for a very specific failure mode: when impersonal mechanisms are described as if they want, intend, punish or guide.
It detects anthropomorphic and cosmic-purpose language in English and Hebrew, flags severity, and can suggest neutral, causal rewrites. For longer texts it also scores teleology density and infiltration: how deeply teleology shapes the overall narrative.
- Bilingual: English + Hebrew rule-based guard.
- No black-box "feeling" scores: explicit patterns and reasons.
- Designed for AI labs, safety teams, and critical interfaces.
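A minimal sketch of what an explicit, rule-based guard along these lines could look like in Python. The rule table, category names, severity levels, and the `analyse()` helper are illustrative assumptions, not Honestra's actual rule set or API; the real guard covers far more patterns in both English and Hebrew.

```python
import re
from dataclasses import dataclass

# Illustrative rule table: each entry pairs a regex with a category,
# a severity, and an explicit reason. Honestra's real rules are far
# more extensive and bilingual; the Hebrew entry below is one example.
RULES = [
    (re.compile(r"\b(model|algorithm|system)\s+(wants?|intends?|decides?)\b", re.I),
     "anthropomorphism", "medium",
     "An impersonal mechanism is described as having desires or intentions."),
    (re.compile(r"\b(universe|fate)\s+is\s+(guiding|rewarding)\b", re.I),
     "cosmic_purpose", "high",
     "Events are framed as directed by cosmic purpose rather than prior causes."),
    (re.compile(r"\b(algorithm|system)\s+is\s+punishing\b", re.I),
     "moralised_system", "high",
     "A rule-based outcome is framed as moral punishment."),
    (re.compile(r"המודל\s+רוצה"),  # Hebrew: "the model wants"
     "anthropomorphism", "medium",
     "An impersonal mechanism is described as wanting something."),
]

@dataclass
class Finding:
    category: str
    severity: str
    reason: str
    span: tuple  # (start, end) character offsets in the analysed text

def analyse(text: str) -> dict:
    """Flag teleological phrasing and score teleology density for longer texts."""
    findings = [
        Finding(category, severity, reason, match.span())
        for pattern, category, severity, reason in RULES
        for match in pattern.finditer(text)
    ]
    # Density: share of sentences that contain at least one flagged pattern.
    sentences = [s for s in re.split(r"[.!?\n]+", text) if s.strip()]
    flagged = sum(1 for s in sentences if any(p.search(s) for p, *_ in RULES))
    density = flagged / len(sentences) if sentences else 0.0
    return {"findings": findings, "teleology_density": density}
```

Because every finding carries the category and reason of the pattern that produced it, reports stay auditable: there is no opaque score to interpret.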
What Honestra flags
Anthropomorphism
"The model wants to help you" → rewritten as "the model is configured to respond to your request".
Cosmic purpose
"The universe is guiding this answer" → rewritten as "these events follow from prior causes in the system".
Moralised systems
"The algorithm is punishing you" → rewritten as "this rule reduces your access based on clear conditions".
How to start
- Download the Honestra Chrome Extension (.zip file).
- Extract the ZIP and load it via chrome://extensions (Developer Mode → Load unpacked).
- Right-click any AI output → "Analyze with Honestra Guard".
- For backend/API integration, use the guard engine in Anti-Teleology · Honestra (a minimal wrapper sketch follows below).
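A minimal sketch of putting a guard engine behind an HTTP endpoint, assuming Flask. The import path `honestra.guard` is a placeholder, not the actual module name in Anti-Teleology · Honestra; the `analyse()` sketch above would work the same way.

```python
from flask import Flask, jsonify, request

# Placeholder import: substitute the real guard engine from
# Anti-Teleology · Honestra, or the analyse() sketch defined above.
from honestra.guard import analyse  # hypothetical module path

app = Flask(__name__)

@app.post("/guard")
def guard():
    """Analyse posted text and return findings plus the density score as JSON."""
    text = request.get_json(force=True).get("text", "")
    report = analyse(text)
    return jsonify({
        "findings": [vars(f) for f in report["findings"]],
        "teleology_density": report["teleology_density"],
    })

if __name__ == "__main__":
    app.run(port=8080)
```

Keeping the HTTP layer this thin leaves the rule set and rewrite logic in one place, so the extension and any backend callers see the same findings.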