Gaming peripheral company Razer is betting that people want AI holograms. So much so that it introduced a perplexing new product at CES 2026 that early critics have dubbed a “friend in a bottle.” Project AVA is a small glass cylinder housing a 5.5-inch animated desk buddy that can interact with you, coach you, or offer gaming advice on demand, all powered by xAI’s Grok.
Project AVA uses a technology Razer calls “PC Vision Mode” that watches your screen, allowing its 3D animated inhabitant to offer real-time commentary on your gameplay, track your mood, or simply hang out. It attempts to sell the illusion of presence—a companion that isn’t just an app you close, but a physical object that lives in your room.
It’s not a bad idea, in theory. Giving AI a face is not just a marketing ploy but a biological inevitability. Yet Project AVA marks a strange new milestone in our march toward AI companions.
The inevitability of holographic AI
When OpenAI introduced GPT-4o’s voice chats in ChatGPT in the summer of 2024, humanity entered a new era of computer interaction. Suddenly, we could converse with AI voices smart and natural-sounding enough to sustain a real dialogue. Since then, we have seen other voice AIs like Gemini Live, which introduces pauses, breathing, and other human touches that cross the uncanny valley, allowing many users to suspend disbelief and even form a bond with these assistants.
Research has shown that for deep emotional venting, users currently prefer voice-only interfaces because they feel safer and less judgmental. Without a face to scrutinize, we avoid the social anxiety of being watched. However, some neuroscientists argue that this preference may just be a temporary work-around for bad technology.
Our brains are evolutionarily hardwired for face-to-face interaction. The mirror neuron system, which allows us to feel empathy by watching others, remains largely dormant during voice-only chats. A 2024 study on “Generation WhatsApp” found that neural synchrony between two brains is significantly weaker during audio-only exchanges than during face-to-face ones. To feel truly “heard,” we need to see the listener.
Behavioral science also tells us that up to 93% of communication is nonverbal. Trust is encoded in micro-expressions: a dilating pupil, a rapid blink, an open posture. A voice assistant transmits none of these signals, forcing users to operate on blind faith. Humans still find them engaging because our brains fill in the gaps, conjuring faces much as we do when reading a book. Furthermore, according to a 2025 brain-scan study, familiar AI voices activate emotional-regulation areas, suggesting that neural familiarity builds with repeated interaction.