Google’s AI develops its own encryption method

AI has been developing steadily over the years, with Google researchers having already built systems that can generate trippy art. Now they have moved on to something potentially more alarming: they have created AI that can encrypt its own messages without human help, and without humans knowing how the encryption works.

A research paper describes an experiment by Google researchers Martin Abadi and David G. Andersen, in which they set up three artificial neural networks, named Alice, Bob, and Eve, and trained them to send messages to each other that only the intended recipient could understand, using an encryption method the networks devised themselves.

Abadi and Andersen assigned each AI a task: Alice had to send a secret message that only Bob could read, while Eve would try to figure out how to eavesdrop and decode the message herself. The experiment started with a plaintext message that Alice converted into unreadable gibberish, which Bob could decode using a key he shared with Alice, and which Eve never saw.
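To make the setup concrete, here is a minimal sketch of the three-network arrangement in PyTorch. The actual paper used TensorFlow with a mix of fully connected and convolutional layers; the layer sizes, activation choices, and error measure below are simplified assumptions for illustration, not the authors' exact architecture.

```python
# Sketch of the Alice/Bob/Eve setup: Alice sees the plaintext and a key,
# Bob sees Alice's ciphertext and the same key, Eve sees only the ciphertext.
import torch
import torch.nn as nn

MSG_BITS = 16   # plaintext length used in the experiment
KEY_BITS = 16   # shared by Alice and Bob; Eve never receives it

def make_net(in_bits, out_bits):
    # Small fully connected network producing values in (-1, 1),
    # treated here as "soft bits".
    return nn.Sequential(
        nn.Linear(in_bits, 64), nn.ReLU(),
        nn.Linear(64, out_bits), nn.Tanh(),
    )

alice = make_net(MSG_BITS + KEY_BITS, MSG_BITS)  # plaintext + key -> ciphertext
bob   = make_net(MSG_BITS + KEY_BITS, MSG_BITS)  # ciphertext + key -> plaintext guess
eve   = make_net(MSG_BITS,            MSG_BITS)  # ciphertext only -> plaintext guess

def reconstruction_error(guess, plaintext):
    # Mean absolute difference between recovered and true bits (-1/+1 encoding).
    return (guess - plaintext).abs().mean()
```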

Considering the frequency with which AIs and chatbots decide they’ll wipe out humanity or simply keep people in a “human zoo” you might find this terrifying and start scurrying for the nearest underground bunker. But before you start stockpiling canned goods, you can relax. While it’s true that the machines have managed to have a conversation about which we know nothing, the reality is a little more banal than global takeover and the subjugation of the human populace.


At first, Alice and Bob were apparently bad at hiding their secrets, but over the course of 15,000 attempts Alice worked out her own encryption strategy and Bob simultaneously figured out how to decrypt it. The messages were only 16 bits long, with each bit being a 1 or a 0, so the fact that Eve could only guess about half of the bits correctly means she was doing no better than flipping a coin.
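Continuing the sketch above, the training loop below shows (in simplified, assumed form) how this dynamic can be set up: Bob is rewarded for reconstructing the message, Eve is rewarded for doing the same without the key, and Alice and Bob are additionally penalised whenever Eve does better than random guessing.

```python
# Simplified adversarial training loop; hyperparameters and the exact loss
# are assumptions, not the values used in the paper.
ab_opt  = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
eve_opt = torch.optim.Adam(eve.parameters(), lr=1e-3)

for step in range(15000):
    # Random plaintexts and keys, with bits encoded as -1 / +1.
    plaintext = torch.randint(0, 2, (256, MSG_BITS)).float() * 2 - 1
    key       = torch.randint(0, 2, (256, KEY_BITS)).float() * 2 - 1

    # --- Alice/Bob update ---
    ciphertext = alice(torch.cat([plaintext, key], dim=1))
    bob_err = reconstruction_error(bob(torch.cat([ciphertext, key], dim=1)), plaintext)
    eve_err = reconstruction_error(eve(ciphertext), plaintext)
    # Bob should succeed (low error); Eve should stay at chance level, which
    # for random -1/+1 guesses is an average absolute error of about 1.0.
    ab_loss = bob_err + (1.0 - eve_err) ** 2
    ab_opt.zero_grad(); ab_loss.backward(); ab_opt.step()

    # --- Eve update (ciphertext detached so Alice is not trained here) ---
    ciphertext = alice(torch.cat([plaintext, key], dim=1)).detach()
    eve_loss = reconstruction_error(eve(ciphertext), plaintext)
    eve_opt.zero_grad(); eve_loss.backward(); eve_opt.step()
```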

The significance of the result comes down to the fact that Alice and Bob developed their encryption method on the fly, jointly arriving at an encryption and decryption strategy that only they could make sense of, even while Eve was intercepting their messages.

So while the messages were a far cry from Skynet, they do show that neural networks can invent an encryption method of their own that even another AI can't decrypt.

Of course, the personification of these three neural networks oversimplifies things a little: because of the way machine learning works, even the researchers don't know what kind of encryption method Alice devised. That makes it hard to apply the system to anything practical at the moment.

Via: New Scientist
