Two neural network algorithms have learned to communicate using a cryptographic encryption scheme they developed on their own, Google Brain researchers have found.

Google Brain, a deep learning research unit of the search company, worked on a project to teach neural nets to create their own cryptographic algorithms. While not very sophisticated at the moment, this research might set the table for encryption that gets stronger as attackers try to crack it.

The scientists, Martin Abadi and David G. Andersen, explain that their experiment was designed to find out whether neural networks, the elementary units of AI, can learn how to communicate secretly. The research paper is accessible via the link.

The scientists programmed two neural networks and named them Alice and Bob. And what happens when you tell two clever computers (Alice and Bob) to talk to each other in secret and task another AI (Eve) with cracking that conversation? "You get one of the coolest experiments in cryptography," they said.

Alice was designed to send an encrypted message of 16 zeroes and ones to Bob, who was designed to decode it. The two networks started with a shared key, the basis for the message's encryption. Eve was placed squarely in the middle, observing the ciphertext and attempting to decrypt it without the key. This arrangement formed a generative adversarial network among the three.
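The three-party dataflow can be sketched in a few lines of Python. This is only an illustration of who sees what: the real networks learn their own transform, so the XOR used for Alice and Bob here is a classical stand-in, not what the networks actually invented, and the random-guessing Eve is likewise a placeholder.

```python
import random

N_BITS = 16  # message length used in the experiment

def random_bits(n):
    """A random list of n zeroes and ones."""
    return [random.randint(0, 1) for _ in range(n)]

def alice(plaintext, key):
    # Stand-in for Alice's learned encryption: XOR each bit with the key.
    return [p ^ k for p, k in zip(plaintext, key)]

def bob(ciphertext, key):
    # Bob holds the same shared key, so he can invert Alice's transform.
    return [c ^ k for c, k in zip(ciphertext, key)]

def eve(ciphertext):
    # Eve sees only the ciphertext; without the key she can only guess.
    return [random.randint(0, 1) for _ in ciphertext]

plaintext = random_bits(N_BITS)
key = random_bits(N_BITS)
ciphertext = alice(plaintext, key)

assert bob(ciphertext, key) == plaintext  # Bob recovers the message exactly
```

The key point the sketch captures is the asymmetry: Bob's function takes the key as an input, Eve's does not.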

To keep Eve from working out the encryption, Alice kept changing her encryption scheme in different ways, and Bob adaptively adjusted his decryption to keep up.

The researchers scored Eve by how close her output came to the true message, Alice by whether Eve's answer was further from the original message than a random guess, and Bob by whether his reconstruction met a specific accuracy threshold.

After several experiments, Eve was unable to recover anything about Alice and Bob's communications. This kind of learning happens over thousands of repetitions as a multitude of numerical weights are adjusted within the algorithms, so not even the researchers can understand how the encryption was built without an intensive, time-consuming evaluation.

The researchers acknowledge that their experiment merely shows neural networks can create their own encryption, not that they are automatically good at it.

"While it seems unbelievable that neural networks would become great at cryptanalysis, they may be very efficient in understanding metadata and in traffic analysis," they wrote.

Even though this was a simple test of whether AI can succeed at creating a form of encryption, it does provoke discussion about the future of security.

What do you think?