Deepfake detectors can be defeated, computer scientists show for the first time

Systems designed to detect deepfakes, videos that manipulate real-life footage using artificial intelligence, can be deceived, computer scientists have shown. The detectors can be defeated by inserting inputs called adversarial examples into every frame of a video. Adversarial examples are slightly perturbed inputs that cause machine learning models to make mistakes.

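The article does not include the researchers' actual attack, but the general idea of an adversarial example can be illustrated with a minimal sketch. The snippet below uses the common fast gradient sign method (FGSM) on a single frame: it nudges each pixel slightly in the direction that increases the detector's loss, so the frame looks unchanged to a human but can flip the model's prediction. The tiny `detector` network, the `fgsm_perturb` helper, and the epsilon value are all hypothetical stand-ins, not the method or models used in the study.

```python
import torch
import torch.nn as nn

def fgsm_perturb(frame, label, detector, epsilon=0.01):
    """Generate an adversarial version of a single video frame (FGSM sketch).

    frame:    tensor of shape (1, 3, H, W), pixel values in [0, 1]
    label:    the detector's correct class for this frame (e.g. 1 = "fake")
    detector: any differentiable classifier (hypothetical stand-in here)
    epsilon:  maximum per-pixel perturbation magnitude
    """
    frame = frame.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(detector(frame), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clamp so the result is still a valid image.
    adv_frame = frame + epsilon * frame.grad.sign()
    return adv_frame.clamp(0.0, 1.0).detach()

# Toy stand-in for a deepfake detector: a tiny CNN with two output
# classes ("real" vs. "fake"). A real detector would be far larger.
detector = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)

frame = torch.rand(1, 3, 64, 64)   # one synthetic video frame
true_label = torch.tensor([1])     # detector's correct answer: "fake"
adv = fgsm_perturb(frame, true_label, detector)
print((adv - frame).abs().max())   # perturbation stays within epsilon
```

Applying a perturbation like this to every frame of a video is what the article describes at a high level; the actual attack in the study may differ in how the perturbation is computed and constrained.
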
Source: sciencedaily.com
