It’s one of those buzzwords we hear more and more each year: machine learning (ML).
Conjuring images of science fiction films and sentient robots, machine learning refers to computer systems that use algorithms and statistical models to analyze and process data without direct human instruction. As we’ve developed better, more versatile connected devices, our machines have gotten smarter. Just look at IBM’s Watson supercomputer, or Siri, the AI assistant on your Apple devices.
The more advanced our AI, the more security risks emerge.
A new report from Tel Aviv startup Adversa tackles this issue.
Cybersecurity blog The Daily Swig recently spotlighted the paper, which asserts that in AI systems, “vulnerabilities can exist in images, audio files, text and other data used to train and run machine learning models.”
These weaknesses make it easier for cybercriminals to manipulate the systems, since AI has a hard time filtering out “malicious inputs and interactions,” the report says.
Adversa found that machine learning systems processing visual data were the most sensitive to these attacks: “vision” accounted for 65 percent of attacks, followed by “analytics” at 18 percent, “language” at 13 percent and “autonomy” at just 4 percent.
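To see why vision systems top the list, it helps to look at the best-known class of attacks. The fast gradient sign method (FGSM) perturbs an image just enough to change a model’s prediction. The report doesn’t tie its figures to any single technique, so the PyTorch sketch below, with its placeholder model and epsilon value, is purely illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: nudge every pixel a tiny step in the
    direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The change is usually imperceptible to a person but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage (the classifier, images and labels here are placeholders):
# adv = fgsm_attack(pretrained_classifier, images, labels)
# pretrained_classifier(adv).argmax(1)  # often disagrees with the true labels
```

The perturbed image typically looks identical to a human, which is exactly why “malicious inputs” are so hard to filter out.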
“With the growth of AI, cyberattacks will focus on fooling new visual and conversational interfaces,” according to the report. “Additionally, as AI systems rely on their own learning and decision making, cybercriminals will shift their attention from traditional software workflows to algorithms powering analytical and autonomy capabilities of AI systems.”
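What does fooling a conversational interface look like? The report doesn’t give concrete examples, but a toy sketch shows the idea: a single look-alike character can slip a message past a naive keyword filter. The filter and phrases below are invented for illustration:

```python
# A naive keyword filter, of the kind a simple language pipeline might use.
BLOCKED_PHRASES = {"free money"}

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocked phrase."""
    return any(phrase in text.lower() for phrase in BLOCKED_PHRASES)

print(naive_filter("claim your free money now"))  # True: caught
# Swap the Latin 'e' for a visually identical Cyrillic 'е' (U+0435):
print(naive_filter("claim your frее money now"))  # False: slips through
```

Real language attacks are more sophisticated, but the principle is the same: the input reads normally to a person while evading the model or filter behind it.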
The big concern is that, because these advanced AI systems are still relatively new, not enough defenses have been put in place to keep them safe and, most crucially, to protect the sensitive data in their charge. At many companies today, there is unfortunately no dedicated security team focused on AI.
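For teams that do take this on, one common baseline hardening step is adversarial training: generating attack inputs during training so the model learns to resist them. Again, this is a standard technique rather than anything the report prescribes, and the model, optimizer and epsilon below are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_training_step(model: nn.Module, optimizer: torch.optim.Optimizer,
                              images: torch.Tensor, labels: torch.Tensor,
                              epsilon: float = 0.03) -> float:
    """One training step on a 50/50 mix of clean and FGSM-perturbed inputs."""
    # Craft perturbed copies of the batch (same FGSM idea as above).
    crafted = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(crafted), labels).backward()
    adv_images = (crafted + epsilon * crafted.grad.sign()).clamp(0, 1).detach()

    # Standard update, averaging the loss over clean and adversarial inputs.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```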
Alex Polyakov, co-founder and CEO of Adversa, told The Daily Swig that the tide is changing: his company and others now advise organizations on how to address these machine learning threats.
“The technology itself is a double-edged sword and can serve both good and bad,” Polyakov said.
As with all technology, it’s important that firms stay abreast of developing industry standards to ensure sensitive data is handled safely. We might be entering an ever more complex AI world, but as always, safety must come first.