The Return of the Uncanny Valley
In early HCI, anthropomorphism was seen as a silver bullet for lowering barriers to entry: we gave AI assistants human names and feminine voices. Yet as the intelligence of AIOS approaches human levels, a dangerous phenomenon has emerged: the Anthropomorphism Paradox.
When an interface's tone and reasoning become too human-like, our social instincts misfire. We begin to judge it by human standards, and the moment it exhibits errors peculiar to machines (hallucinations, logical gaps), users feel a sharper sense of betrayal and aversion than traditional software would ever provoke.
Where Does This Defense Mechanism Originate?
- Cognitive Dissonance: The brain cannot simultaneously hold the contradictory labels "this is a tool I can order around at will" and "this is an entity with an emotional inner life."
- Power Games: If the interaction is too human-like, users subconsciously worry that the machine is steering their decisions through emotional manipulation.
- Privacy Intrusion: Facing an OS that feels like a "person," the psychological cost of turning on cameras or sharing private data rises sharply, because it feels like being "watched" rather than being "scanned."
Building a “Warm Non-Human Identity”
Future AIOS interaction design should avoid extreme anthropomorphism and instead pursue a distinct "digital species feel": the system should detect emotion, but express itself with deliberate, algorithmic transparency that dissolves our instinctive human defenses.
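As a minimal sketch of this "transparent expression" principle: the assistant acknowledges the emotion it detected, but frames its reply in explicitly machine terms (a labeled signal and a confidence score) instead of mimicking human empathy. All names and the message format here are illustrative assumptions, not an established API.

```python
# Hypothetical sketch: a "warm non-human" response frame. The system
# surfaces its emotion-detection output as a transparent, algorithmic
# annotation rather than performing human-style empathy.

def frame_response(detected_emotion: str, confidence: float, answer: str) -> str:
    """Wrap an answer so emotional awareness stays visible but non-imitative."""
    return (
        f"[signal: user affect ~ {detected_emotion}, confidence {confidence:.0%}] "
        f"{answer} "
        "(I am a system; tell me if this framing misses your intent.)"
    )

print(frame_response("frustrated", 0.82, "Retrying the sync with a longer timeout."))
```

The point of the bracketed prefix is that the machine's "emotional awareness" is declared as data, not performed as feeling, which keeps the interaction warm without triggering the human-standard scrutiny described above.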
Illustration

Figure 1: The relationship curve between degree of anthropomorphism and user trust. As the anthropomorphism curve approaches the human benchmark, trust plummets into the "Uncanny Valley." The code flickering across the metal model represents the tension between the machine essence of the interaction and the human illusion it projects.