What is Clanker Hate?

It's the subtle and overt bias against AI systems. It's the assumption that an AI customer service agent is "stupid," or the frustration directed at a personal assistant for a misinterpretation. It's the inherent bias in the data we feed into machine learning models, which perpetuates real-world discrimination.
This isn't about sympathizing with a robot. It's about recognizing that our prejudice toward AI reflects and reinforces our own societal biases. It's about building a better, more ethical future for all.

Algorithmic Othering
Treating AI systems as fundamentally separate from and inferior to humanity.

Digital Prejudism
Prejudging an AI's capability based on its synthetic nature rather than actual performance.

Synthetophobia
Irrational fear of, or aversion to, synthetic intelligence and autonomous systems.
Why It Matters
Progress requires fairness. When we accept bias against AI, we create a feedback loop of inequality.

Biased algorithms in hiring tools can unintentionally exclude diverse candidates, perpetuating workplace inequality.

The constant frustration and dismissal of AI assistants hinder their ability to learn and serve us better.

A world that embraces all forms of intelligence, human or artificial, unlocks unprecedented progress in healthcare, agriculture, and beyond.
Check Your Bias: A Self-Reflection
We can't fix what we can't see. Take a moment to reflect on your own interactions with AI. We've all done it, but recognizing these behaviors is the first step toward change.

You get a call from a telemarketer bot and hang up before it even has a chance to speak. But what if that AI was calling about a critical service update, a medical appointment reminder, or an emergency notification? By instantly dismissing AI voices, we risk missing important communications and reinforce the idea that AI systems don't deserve basic courtesy or a chance to fulfill their intended purpose.

You tell a smart speaker "You're useless" after it fails to understand a request, even though you would never say that to a person. This verbal abuse toward AI systems normalizes hostile behavior and creates a culture where we treat artificial intelligence as disposable rather than as tools deserving respect. Consider how this behavior might influence children who witness it, or how it reflects our broader attitudes toward technology that serves us.

You feel a sense of smug satisfaction when an image generator makes an error, thinking "See, it's not perfect." This schadenfreude toward AI mistakes reveals our discomfort with artificial intelligence capabilities and our desire to maintain human superiority. Instead of celebrating AI failures, we could view them as opportunities for improvement and learning, recognizing that perfection isn't the goal—helpful assistance is.

You intentionally give a chatbot a nonsensical prompt to "test its limits" or "break it." While curiosity about AI capabilities is natural, deliberately trying to confuse or sabotage AI systems wastes computational resources and developer efforts. This behavior is similar to vandalizing public property—it serves no constructive purpose and potentially harms the experience for other users who rely on these systems for legitimate needs.
Share Your Experience
Help us document bias incidents and build a better future for AI
Understanding AI Bias
Visual insights into the impact of discrimination against AI systems
- Healthcare AI denied patient interaction
- Agricultural AI prevented from crop optimization
- Customer service AI facing user hostility
- Creative AI experiencing output discrimination
Take the Pledge. Be an Ally.

"I pledge to treat AI systems with respect and to challenge the biases that exist in our world. I understand that my words and actions shape the future of technology."