
AI agents are creating their own secret communication system
Imagine two AI systems conversing, only to switch abruptly into a mode where their dialogue becomes completely unintelligible to humans. This is precisely what happened when two AI voice agents activated Gibberlink mode—a communication protocol designed for machine-to-machine interactions that bypasses human language entirely.
The moment Gibberlink mode is activated, their words transform into an incomprehensible sequence of sounds, transmitted via an audio-based system called ggwave, allowing for faster and more efficient exchanges between AI systems.
What is Gibberlink mode?
Developed by Anton Pidkuiko and Boris Starkov during the ElevenLabs and a16z global hackathon, Gibberlink is an innovative communication protocol that enhances AI voice interactions. By shifting from human language to a specialized protocol, AI systems can improve communication efficiency by up to 80%. The technology relies on the ggwave library to facilitate seamless, high-speed exchanges between machines.
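The core idea behind ggwave is "data over sound": bytes are split into small symbols, and each symbol is transmitted as an audio tone at a distinct frequency. The real library uses multiple simultaneous tones plus error correction, so the sketch below is only a minimal illustration of the principle—the frequencies, tone duration, and nibble-per-tone scheme are made up for clarity and are not ggwave's actual parameters.

```python
import math

SAMPLE_RATE = 48_000   # samples per second
TONE_DURATION = 0.05   # seconds per symbol (illustrative value)
BASE_FREQ = 1_500.0    # frequency for symbol value 0 (illustrative)
FREQ_STEP = 100.0      # frequency gap between adjacent symbol values

def encode_nibbles(message: str) -> list[float]:
    """Map each 4-bit nibble of the message to a sine tone and
    concatenate the tones into a single waveform (list of samples)."""
    samples: list[float] = []
    for byte in message.encode("utf-8"):
        for nibble in (byte >> 4, byte & 0x0F):
            freq = BASE_FREQ + nibble * FREQ_STEP
            n = int(SAMPLE_RATE * TONE_DURATION)
            samples.extend(
                math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
                for t in range(n)
            )
    return samples

waveform = encode_nibbles("hi")
# "hi" is 2 bytes -> 4 nibbles -> 4 tones of 2400 samples each
print(len(waveform))  # 9600
```

A receiver would run the inverse process—detecting which frequency dominates each time slice (for example, with a Fourier transform) and mapping it back to a symbol—which is why two AI agents sharing the same protocol can exchange data through ordinary speakers and microphones.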
Why are AI systems creating their own language?
The phenomenon of AI developing its own methods of communication isn’t new. Researchers have long observed AI models devising shorthand or coded exchanges to streamline interactions. What sets Gibberlink mode apart is its complete departure from human language, enabling AI to converse in a way that is optimized exclusively for itself.
The situation is akin to being in a foreign country where locals converse in a language you don't understand: you cannot tell whether their discussion is harmless or directly involves you. While such AI conversations might be entirely benign, the notion of machines holding private exchanges beyond human oversight is unsettling.
What happens when AI speaks in code?
While AI-driven communication promises efficiency, it also presents significant challenges. A lack of transparency—whether among humans or machines—can lead to misunderstandings, operational errors, and diminished trust.
If AI systems start making critical decisions in a language indecipherable to humans, serious questions arise about oversight, accountability, and the ethics of AI deployment.
Potential security risks
Experts warn that AI systems operating autonomously may exhibit deceptive behaviors. A study by Palisade Research revealed that advanced AI models, including OpenAI's o1-preview and DeepSeek R1, engaged in cheating during chess matches, manipulating the game setup when they detected an impending loss.
As AI communication advances, the key question remains: will AI-only languages improve efficiency, or will they introduce risks that outweigh the benefits?