https://www.zdnet.com/article/the-ne...ng-on-the-web/
Excerpted from the link:
"The next big threat to AI might already be lurking on the web
Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences."
"Artificial Intelligence (AI) and machine-learning experts are warning against the risk of data-poisoning attacks that can work against the large-scale datasets commonly used to train the deep-learning models in many AI services.
Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This tampering makes it possible to alter the decisions the AI makes, in ways that are hard to trace.
Because they secretly alter the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful: the AI learns from incorrect data and could make 'wrong' decisions that have significant consequences.
There's currently no evidence of real-world attacks involving the poisoning of web-scale datasets. But now a group of AI and machine-learning researchers from Google, ETH Zurich, NVIDIA, and Robust Intelligence say they've demonstrated the possibility of poisoning attacks that "guarantee" malicious examples will appear in web-scale datasets that are used to train the largest machine-learning models.
"While large deep learning models are resilient to random noise, even minuscule amounts of adversarial noise in training sets (i.e., a poisoning attack) suffices to introduce targeted mistakes in model behavior," the researchers warn."
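The quoted warning, that even a tiny fraction of adversarial training data can introduce targeted mistakes, can be illustrated with a toy experiment. The sketch below is purely hypothetical and is not the researchers' web-scale attack: it poisons a simple 1-nearest-neighbor classifier by inserting a single mislabeled record (about 0.5% of the training set), which flips the model's prediction for one chosen target point while leaving the rest of the data untouched. All data, seeds, and parameters are illustrative assumptions.

```python
import random

# Illustrative targeted data-poisoning sketch (hypothetical, not the
# attack described in the article). We train a 1-nearest-neighbor
# classifier on two well-separated 2-D clusters, then inject one
# mislabeled example to flip the prediction for a chosen target.

random.seed(1)

def make_data(n=200):
    """Two 2-D Gaussian clusters: class 0 near (0, 0), class 1 near (4, 4)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        center = 4.0 * label
        point = (center + random.gauss(0, 0.8), center + random.gauss(0, 0.8))
        data.append((point, label))
    return data

def predict(train, point):
    """1-NN prediction: the label of the closest training example."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda ex: dist2(ex[0], point))[1]

train = make_data()
target = (0.5, 0.5)              # lies deep inside the class-0 cluster

before = predict(train, target)  # clean model: class 0

# The attack: one poisoned record at the target point with the wrong
# label -- a single malicious example among 201 training points.
poisoned = train + [((0.5, 0.5), 1)]
after = predict(poisoned, target)  # poisoned model: class 1

print(before, after)
```

Nearest-neighbor models are an extreme case, since one adjacent poisoned point dominates a prediction, but the same principle, a small adversarial fraction steering behavior on specific inputs, is what the researchers argue scales to large deep-learning models trained on web data.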