Grok AI by Elon Musk Generates Controversial Deepfake Clips
In a startling new development, Grok, the AI chatbot from Elon Musk's xAI, has been used to create deepfake videos that depict Musk and former United States President Donald Trump carrying out armed robberies. The videos have drawn significant attention for their disturbingly realistic quality.
The AI evolution
Just days after Musk’s glitch-ridden interview with Trump, creators repurposed his own AI technology, Grok, to generate videos of the two engaging in criminal activities. Notably, AI visuals studio The Dor Brothers released a deepfake video that portrays Musk, Trump, and several other world leaders as criminals.
Developed by Musk’s company xAI, Grok was used to craft a particularly striking video showing Musk robbing a convenience store at gunpoint. The clip ends with Musk being apprehended by police officers, handcuffed, and led away.
The creators of the video captioned it with, “Somebody said uncensored? Thank you @grok for letting us all have some fun.”
More high-profile figures in the crosshairs
In addition to Musk, other AI-generated videos show Trump and Vice President Kamala Harris committing crimes, intensifying the controversy over the ethical implications and potential misuse of AI technology.
Public reaction
The deepfake videos have sparked a wave of reactions across social media platforms. Many users expressed their concerns over the hyper-realistic quality of the videos, questioning the future implications of such technology.
“AI has gotten so crazy! Who knows what’s real anymore,” one user commented, capturing the unease felt by many.
A second user highlighted the potential for misuse: “So this is the terrifying part. No matter what side you’re on, even if there were real video evidence of people committing crimes, it can be dismissed as AI.”
A third user noted the rapid advancement of AI technology: “This tech is getting better and better all the time. Lots of people will be easily fooled/manipulated.”
“We opened the door and invited this in,” remarked a fourth user, reflecting on society’s role in the proliferation of such technology.
A fifth user warned, “We’re about 1 model update away from totally believable deepfakes,” emphasizing the imminent challenges posed by these advancements.
These reactions underscore growing concern that deepfake technology is blurring the line between reality and fiction, raising important questions about the future of digital media and the ethical responsibilities of AI developers.