Recent research from North Carolina State University reveals that artificial intelligence (AI) systems are more susceptible to adversarial attacks than previously believed. An adversarial attack manipulates the data fed into an AI system so that the system draws the wrong conclusion from it. To evaluate these vulnerabilities in deep neural networks, the research team created QuadAttacK, a software tool that probes AI systems for susceptibility to such attacks. The study found that adversarial vulnerabilities are widespread in commonly used deep neural networks, underscoring the urgent need for more robust AI. Tianfu Wu, co-author of the paper and associate professor of electrical and computer engineering, emphasizes the key finding: “These vulnerabilities are much more common than previously thought. Attackers can exploit these weaknesses to force the AI system to interpret the data in any way they desire.”
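To make the idea concrete, here is a minimal sketch of one classic attack of this kind, the fast gradient sign method (FGSM), written in PyTorch. FGSM is a standard illustrative technique, not the method introduced in this study; the choice of model, the random stand-in input, and the perturbation budget are all assumptions made for the example.

```python
import torch
import torchvision

# An off-the-shelf pretrained classifier (illustrative choice only;
# not one of the networks evaluated in the study).
model = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.DEFAULT
)
model.eval()

# A stand-in input; in practice this would be a real, preprocessed image.
x = torch.rand(1, 3, 224, 224)

# Record the model's original, unattacked prediction.
with torch.no_grad():
    orig_label = model(x).argmax(dim=1)

# FGSM: take one gradient step that increases the loss on the original
# label, i.e., pushes the model away from its own answer.
x.requires_grad_(True)
loss = torch.nn.functional.cross_entropy(model(x), orig_label)
loss.backward()

epsilon = 0.03  # perturbation budget; small enough to be hard to notice
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    adv_label = model(x_adv).argmax(dim=1)
print("original:", orig_label.item(), "adversarial:", adv_label.item())
```

The perturbation stays within a tiny per-pixel budget, so the altered image looks essentially unchanged to a person, yet the classifier's output can flip.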
The potential risks are especially significant in applications where lives are at stake. Attackers could, for instance, subtly alter a road sign so that an autonomous vehicle’s AI misreads it, potentially causing an accident. When the researchers evaluated four widely used networks with QuadAttacK, they were surprised to find that all four were highly susceptible to adversarial attacks. The team was also able to fine-tune the attacks so that the networks perceived whatever the attackers wanted them to perceive. QuadAttacK has been made publicly available so that other researchers can test their own systems for these vulnerabilities.
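That ability to make a network see “whatever the attackers want” corresponds to a targeted attack, where the attacker chooses the output in advance. Below is a hedged sketch of a generic iterative targeted attack in PyTorch; it is not the QuadAttacK method itself, and the target class, step count, learning rate, and budget are arbitrary choices for illustration.

```python
import torch
import torchvision

model = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.DEFAULT
)
model.eval()

x = torch.rand(1, 3, 224, 224)   # stand-in image
target = torch.tensor([207])     # attacker-chosen class (arbitrary example)

# Optimize a small additive perturbation rather than the image itself.
delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.SGD([delta], lr=0.01)
epsilon = 0.03                   # keep the change imperceptibly small

for _ in range(50):
    opt.zero_grad()
    # Minimize the loss toward the attacker's target label.
    loss = torch.nn.functional.cross_entropy(
        model((x + delta).clamp(0.0, 1.0)), target
    )
    loss.backward()
    opt.step()
    # Project the perturbation back into the allowed budget.
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)

print("model now predicts:", model(x + delta).argmax(dim=1).item())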
Moving forward, the researchers aim to find ways to minimize these vulnerabilities and harden AI systems against adversarial attacks, work that should contribute to more secure and reliable AI technologies. The study is a timely reminder that vulnerabilities in AI systems must be addressed, especially in applications where wrong decisions can have severe consequences. By understanding and mitigating these risks, researchers can help ensure the safe and trustworthy deployment of AI across a wide range of sectors.