Researchers have demonstrated how a projected gradient descent (PGD) attack can fool medical imaging systems into seeing things that are not there. A PGD attack makes small, carefully calculated perturbations to the pixels of an image to convince an image recognition model to falsely identify the presence of something, in this case signs of disease in medical scans. The researchers successfully fooled all three systems they tested, which classify retina scans, chest X-rays, and dermatological images of potentially cancerous moles, regardless of their level of access to the underlying model. Take a look over at The Register for more information on this specific attack, as well as the general vulnerability of image recognition software.
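The article doesn't include code, but the core of a PGD attack is simple enough to sketch. Below is a minimal, illustrative PyTorch version; the model, epsilon, step size, and iteration count are assumptions for the example, not values from the study:

```python
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent: repeatedly nudge pixels in the
    direction that increases the classifier's loss, while keeping the
    total perturbation inside an eps-ball so it stays hard to notice."""
    images = images.clone().detach()
    loss_fn = nn.CrossEntropyLoss()

    # Start from a random point inside the allowed perturbation range
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1).detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]

        # Step in the direction that increases the loss (untargeted attack)
        adv = adv.detach() + alpha * grad.sign()

        # Project back into the eps-ball around the original image,
        # then clip to valid pixel values
        adv = images + (adv - images).clamp(-eps, eps)
        adv = adv.clamp(0, 1)

    return adv.detach()
```

The resulting image looks essentially unchanged to a human, yet the classifier's prediction flips, which is exactly the failure mode the study demonstrated against the medical systems.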

"Medical AI systems are particularly vulnerable to attacks and have been overlooked in security research, a new study suggests."

Here is some more Tech News from around the web:

Tech Talk