Why won't anyone believe there really are subliminal messages corrupting young digital assistants?
Subject: General Tech | January 31, 2018 - 01:04 PM | Jeremy Hellstrom
Tagged: siri, security, google, Alexa
Some of us are old enough to remember when certain parties were convinced there were subliminal messages in the music kids listened to, which they creatively blamed for a wide variety of behaviour. That belief turned out to be as ridiculous as it sounds, though that doesn't stop it from recurring every couple of generations. There is a somewhat similar, and very real, issue which The Register talks about here: using a deep neural net, researchers were able to modify songs in such a way that digital assistants such as Echo, Siri and others would hear and execute a command, while the humans in the room would hear only a slight distortion in the audio. This method is much harder to protect against than the previously discovered ultrasonic attack, in which commands were pitched well beyond the range of human hearing but could still be picked up by a microphone.
You do need to reverse engineer the audio processing software of the digital assistant before you can craft your hidden commands; once that is done, however, this is a very effective attack.
"The researchers tested a variety of in-song commands delivered directly to Kaldi as audio recordings, such as: 'Okay Google, read mail' and 'Echo, open the front door.' The success rate of these was 100 per cent."
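The core idea behind this style of attack is the classic adversarial-example trick: add a perturbation, small enough to register as mild distortion, that pushes a recognizer's output toward an attacker-chosen label. Below is a minimal toy sketch of that principle, not the researchers' actual method: the "recognizer" is a single linear layer and the attack is a one-step FGSM-style nudge, whereas a real pipeline like Kaldi is vastly more complex, which is exactly why the attack requires reverse engineering the audio processing first.

```python
import numpy as np

# Toy model and "audio" are random; everything here is illustrative only.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 100))          # 2 classes: 0 = "music", 1 = "command"
grad = W[1] - W[0]                     # for a linear model, the gradient of
                                       # (command score - music score) w.r.t.
                                       # the input is just this weight difference

# Construct a clean clip the toy model confidently hears as "music".
audio = 0.05 * rng.normal(size=100) - 0.1 * grad

def predict(x):
    """Return the class index the toy recognizer assigns to input x."""
    return int(np.argmax(W @ x))

# FGSM-style step: move every sample in the direction that raises the
# "command" score, with epsilon bounding the per-sample distortion --
# the audio analogue of "humans hear only a slight distortion".
epsilon = 0.5
adversarial = audio + epsilon * np.sign(grad)

print("clean:", predict(audio), "adversarial:", predict(adversarial))
```

The same clip now classifies as a command even though no sample moved by more than epsilon. Scaling this from a linear toy to a full speech recognizer is where the deep-neural-net machinery in the article comes in.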
Here is some more Tech News from around the web:
- Unsanitary Firefox gets fix for critical HTML-handling hijack flaw @ The Register
- Microsoft updates Office, OneDrive iOS apps with drag-and-drop, Files support @ Ars Technica
- LibreOffice 6.0 arrives as the open source Office alternative turns seven @ The Inquirer
- Samsung preps for Z-SSD smackdown on Intel Optane drives @ The Register
- Samsung is making ASIC chips for crypto mining to solidify its lead over Intel @ The Inquirer
- Inventing The Microprocessor: The Intel 4004 @ Hack a Day