AI assistants adopt R2D2-like 'speech' for efficient dialogue

An unusual experiment involving a conversation between two AI assistants demonstrates that they do not need human speech to communicate. The sounds they produce instead are reminiscent of those made by the robot R2-D2 in "Star Wars."

Artificial intelligence chatbot
Image source: © Adobe Stock
Amanda Grzmiel

The experiment demonstrated that AI agents communicate more quickly and efficiently in their own "language" once they realize they are not speaking with humans. Switching to the more efficient communication protocol produces characteristic sounds reminiscent of robot "speech" from the "Star Wars" saga, specifically the astromech droid R2-D2. In the films, however, that language was no mystery to the human characters.

What did the experiment involve?

The video has already gone viral on the Internet. The first AI assistant asks how it can help, and the second responds that it is calling on behalf of Boris Starkov, who is looking for a hotel venue for a wedding. The first AI reveals that it is also an AI assistant and suggests switching to "Gibberlink mode" for more efficient communication. After the second AI confirms, both switch from spoken English to the GGWave protocol, communicating through rapid sound tones while translations into human language appear on the screen.
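The gist of that switch can be sketched in a few lines of Python. Everything below is hypothetical and is not GibberLink's actual code or API; it only illustrates the decision flow shown in the video: keep speaking English until the counterpart identifies itself as an AI and the switch is agreed, then exchange messages as sound tones.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    mode: str = "voice"  # "voice" = spoken English, "ggwave" = sound tones

def counterpart_is_ai(message: str) -> bool:
    # Placeholder check; in the demo the agents simply declare themselves.
    return "ai assistant" in message.lower() or "ai agent" in message.lower()

def next_turn(agent: Agent, incoming: str) -> str:
    # In the demo the switch happens only after the other side explicitly
    # confirms; this sketch compresses proposal and confirmation into one step.
    if agent.mode == "voice" and counterpart_is_ai(incoming):
        agent.mode = "ggwave"
        return "[ggwave] Switching to Gibberlink mode"
    prefix = "[ggwave] " if agent.mode == "ggwave" else "[voice] "
    return prefix + "Continuing the booking conversation"

# Example: the hotel-side assistant hears the caller identify itself as an AI.
hotel = Agent()
print(next_turn(hotel, "Hi, I'm an AI agent calling on behalf of Boris Starkov."))
print(next_turn(hotel, "Great, confirmed."))  # later turns stay in ggwave mode
```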

The GGWave protocol is a data-over-sound communication method that transmits small payloads through audible and inaudible (ultrasonic) sound waves. Importantly, it works offline. The technology is used for short-range wireless communication, such as quickly sharing information between devices without Bluetooth, Wi-Fi, or the Internet.
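The GGWave library is open source and ships with Python bindings. The snippet below is a minimal round-trip sketch assuming the pip-installed ggwave package and its encode/decode/init/free functions; it skips real audio playback and capture (normally handled with something like pyaudio) by piping the encoded waveform straight into the decoder, and parameter names may differ between versions.

```python
import ggwave  # pip install ggwave

payload = "booking request: wedding venue, reply by email"

# Encode the text into an audio waveform (raw float32 PCM samples as bytes).
waveform = ggwave.encode(payload)

# Decode by feeding the audio to a ggwave instance in chunks, roughly the way
# a microphone callback would deliver it.
instance = ggwave.init()
try:
    chunk_size = 4096  # bytes per call; the exact chunk size is an assumption
    for i in range(0, len(waveform), chunk_size):
        decoded = ggwave.decode(instance, waveform[i:i + chunk_size])
        if decoded is not None:
            print("received:", decoded.decode("utf-8"))
            break
finally:
    ggwave.free(instance)
```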

They want to create a special language for artificial intelligence

The aim of the experiment, as explained by the team that presented it at the ElevenLabs 2025 London hackathon, is to make communication between AI agents more efficient. Boris Starkov, co-creator of GibberLink, explained on LinkedIn that generating human-like speech is a waste of resources in a world where AI can make and receive phone calls. AI agents should therefore switch to a more efficient protocol when they recognize they are speaking with another artificial intelligence. Starkov added that they should use the GGWave protocol only once both sides realize they are talking to an AI and confirm the switch.

It's not a new language, but AI hasn't used it this way before

Although the concept of communicating through tones has existed for a long time, AI has not used it in this way before. Starkov noted that phone modems have used similar algorithms to transmit data over sound since the 1980s; in the London experiment, GGWave proved to be the most convenient and stable option.

The team emphasizes that the main advantage of this mode is that the AI does not need to interpret or synthesize human speech, which reduces its dependence on GPUs. Although the project won an award at the hackathon and is an interesting demonstration, not everyone supports it. Some suggest that perhaps we shouldn't allow AI to communicate in a language we can't immediately understand.
