- AI can be manipulated via hidden data in the alpha channel of images, experts warn
- This may pose risks for medical diagnoses and autonomous driving
- Image recognition must adapt to the possibility of this attack
While AI has the ability to analyze images, new research has revealed a significant oversight in modern image recognition platforms.
Researchers from the University of Texas at San Antonio (UTSA) claim that the alpha channel, which controls the transparency of images, is often ignored by these platforms, opening the door to cyber attacks with potentially dangerous consequences for the medical and autonomous driving industries.
The UTSA research team, led by Assistant Professor Guenevere Chen, has developed a proprietary attack method called ‘AlphaDog’ to exploit this overlooked vulnerability in AI systems. The alpha channel, a component of RGBA (red, green, blue, alpha) image data, controls image transparency and plays a crucial role in rendering composite images. When an AI system discards it, humans and the AI can end up perceiving two entirely different versions of the same image.
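The researchers have not published their AlphaDog code, but the underlying disconnect is easy to illustrate. The sketch below (an assumption-laden toy example, not the actual attack) builds an RGBA image whose raw RGB payload is black but whose alpha channel is fully transparent: a viewer compositing it over a white background sees white, while a model that strips the alpha channel reads black.

```python
import numpy as np

# Toy illustration of the alpha-channel disconnect (not AlphaDog itself).
# "Hidden" payload in the RGB channels: pure black pixels.
h, w = 4, 4
rgb = np.zeros((h, w, 3), dtype=np.uint8)

# Alpha = 0 makes every pixel fully transparent to a compositing viewer.
alpha = np.zeros((h, w, 1), dtype=np.uint8)
rgba = np.concatenate([rgb, alpha], axis=-1)

def composite_over(rgba_img, bg=255):
    """Standard 'over' compositing onto a solid background:
    out = fg * a + bg * (1 - a), with alpha normalized to [0, 1]."""
    a = rgba_img[..., 3:4].astype(np.float32) / 255.0
    fg = rgba_img[..., :3].astype(np.float32)
    return (fg * a + bg * (1.0 - a)).astype(np.uint8)

human_view = composite_over(rgba)  # what a viewer renders: all white
ai_view = rgba[..., :3]            # what an alpha-ignoring model reads: all black

print(human_view[0, 0], ai_view[0, 0])  # [255 255 255] [0 0 0]
```

In a real attack the transparent region would carry adversarial content (for example, altered digits on a road sign) rather than a uniform color, but the mechanism is the same: the pixels the model consumes are not the pixels the human sees.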
Risks to autonomous vehicles, medical imaging and facial recognition
The AlphaDog attack is designed to affect humans and AI systems in different ways. To humans, the manipulated images appear relatively normal. When processed by AI systems, however, they are interpreted differently, leading to incorrect conclusions or decisions.
The researchers generated 6,500 images and tested them against 100 AI models, including 80 open-source systems and 20 cloud-based AI platforms such as ChatGPT. Their tests showed that AlphaDog performs particularly well at targeting grayscale areas in images.
One of the most alarming findings of the study is the vulnerability of AI systems used in autonomous vehicles. Road signs, which often contain grayscale elements, can be easily manipulated using the AlphaDog technique so that they are misinterpreted, potentially with dangerous results.
The research also highlights a critical issue in medical imaging, an area increasingly dependent on AI for diagnostics. X-rays, MRIs and CT scans, which are often grayscale, can be manipulated with AlphaDog. In the wrong hands, this vulnerability could lead to misdiagnoses.
Another concern is the potential manipulation of facial recognition systems, which increases the likelihood of security systems being bypassed or individuals being misidentified, opening the door to both privacy concerns and security risks.
The researchers are working with major technology companies, including Google, Amazon and Microsoft, to address vulnerabilities in AI platforms. “AI was created by humans, and the people who wrote the code focused on RGB but left out the alpha channel. In other words, they wrote code for AI models to read image files without the alpha channel,” Chen said.
“That is the vulnerability. The exclusion of the alpha channel on these platforms leads to data poisoning…AI matters. It changes our world, and we have so many concerns,” Chen added.
Via TechXplore