PhotoGuard: MIT’s newest technique protects your photographs against AI manipulation

MIT researchers have developed “PhotoGuard,” a technique that protects images against AI manipulation by adding tiny pixel changes known as perturbations. These changes are invisible to the human eye, but computer models pick up on them. In the age of powerful models such as DALL-E and Midjourney, the tool could help counter the rising risk of AI misuse: AI image generators have advanced to the point where even untrained users can produce an eerily convincing image from a simple text prompt.

Such tools could be used to spread disinformation online, as when people used AI to dress Pope Francis in designer clothing or to create hyperrealistic images of Donald Trump being arrested. PhotoGuard could help with such issues.

PhotoGuard uses two “attack” methods. The simpler “encoder” attack causes the AI model to see the image as random by making minute changes to the image’s latent representation, which makes meaningful manipulation of the image practically impossible. The more sophisticated “diffusion” attack defines “a target image and optimizes the perturbations to make the final image resemble the target as closely as possible,” and it targets the entire diffusion model end to end.
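For a concrete picture of the encoder-attack idea, the sketch below is a minimal, hypothetical PyTorch illustration: a projected-gradient search for an imperceptible perturbation that pushes the image’s latent representation toward that of an uninformative target (a flat gray image here). The tiny convolutional “encoder,” the perturbation budget, and the gray target are stand-in assumptions for illustration, not MIT’s actual code or models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in encoder: in practice this would be the latent encoder of the
# generative model being defended against (e.g. a VAE encoder).
encoder = nn.Sequential(nn.Conv2d(3, 4, kernel_size=8, stride=8), nn.Flatten())

def encoder_attack(image, target_image, eps=8 / 255, step=1 / 255, iters=40):
    """Projected-gradient search for an imperceptible perturbation that pushes
    the image's latent toward the latent of an uninformative target image."""
    with torch.no_grad():
        target_latent = encoder(target_image)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = encoder(image + delta)
        loss = F.mse_loss(latent, target_latent)   # distance to the target latent
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()      # gradient step toward the target
            delta.clamp_(-eps, eps)                # keep the change imperceptible
        delta.grad = None
    return (image + delta).clamp(0, 1).detach()

image = torch.rand(1, 3, 64, 64)        # the photo to immunize
gray = torch.full_like(image, 0.5)      # uninformative target
immunized = encoder_attack(image, gray)
```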

AI image models can be used for harmless edits as well as damaging alterations

The diffusion attack alters the source image in “tiny, invisible” ways that make it appear, to the AI model, to be the target image. Any edits the model then makes are mistakenly applied as though it were working with the target image rather than the original.
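A similarly hypothetical sketch conveys the diffusion-attack objective: instead of matching the latent, the perturbation is optimized so that the model’s output on the immunized image resembles a chosen target image. The small “edit_model” network below is a stand-in for the full image-to-image diffusion pipeline, through which the real attack backpropagates end to end; the network, budget, and gray target are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for the end-to-end editing pipeline: image in, edited image out.
edit_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Sigmoid(),
)

def diffusion_attack(image, target, eps=8 / 255, step=1 / 255, iters=40):
    """Optimize a bounded perturbation so the model's output on the
    immunized image resembles the chosen target image."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        edited = edit_model(image + delta)
        loss = F.mse_loss(edited, target)      # match the model's output to the target
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad = None
    return (image + delta).clamp(0, 1).detach()

image = torch.rand(1, 3, 64, 64)      # original photo
target = torch.full_like(image, 0.5)  # e.g. a plain gray target output
immunized = diffusion_attack(image, target)
```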

While AI image models enable harmless edits, they also make damaging alterations possible, and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) created PhotoGuard to combat these risks. AI image manipulation can sway markets, shape public perception, and compromise personal photos, and in extreme cases these models could even be used to stage fictitious crimes by simulating voices and images. PhotoGuard protects images against unauthorized AI modification while preserving their visual integrity.

The diffusion attack is more resource-intensive and requires a large amount of GPU memory. To make the technique more practical, the researchers propose reducing the number of steps in the diffusion process. Adding perturbations to an image before posting it helps protect it from modification, though the resulting edits may lack realism compared with edits made to the original, non-immunized image.
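The memory cost can be seen in toy form: because gradients must flow back through every denoising step, the autograd graph (and GPU memory) grows with the number of steps that are unrolled, so cutting the step count directly cuts memory. The single-convolution “denoiser” below is purely an illustrative stand-in, not a real diffusion model.

```python
import torch
import torch.nn as nn

denoiser = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in for one denoising step

def unrolled_diffusion(x, num_steps):
    # Each iteration adds activations that must be kept for the backward pass,
    # so memory grows roughly linearly with num_steps.
    for _ in range(num_steps):
        x = x - 0.1 * denoiser(x)
    return x

x = torch.rand(1, 3, 64, 64, requires_grad=True)
out = unrolled_diffusion(x, num_steps=4)   # fewer steps => smaller graph, less memory
out.mean().backward()                      # gradient w.r.t. the input is still available
print(x.grad.abs().mean())
```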
