
Artificial intelligence is reshaping our world, but not all AI uses are legal or ethical. From deepfake scams to algorithm-driven discrimination, here are five AI activities that could land you in serious legal trouble (and the real cases proving it). Stay informed, stay safe, and don’t let curiosity turn into a felony.
1. Generating child sexual abuse material (CSAM)
The crime: Using AI to create, alter, or distribute sexually explicit images of minors.
Real example: In 2023, David Tatum of Charlotte, NC, was sentenced to prison after using AI to transform innocent photos of a 15-year-old (taken 25 years prior) into CSAM. Federal agents identified the victim, now in her 40s, through forensic analysis.
Penalties:
- Federal charges under child pornography laws.
- Up to 20 years in prison per image.
- Lifetime registration as a sex offender.
Why it matters: Even if the depicted victim isn’t real, AI-generated CSAM is often trained on images of real abused children, perpetuating harm.
2. Deepfakes for harassment, fraud, or defamation
The crime: Creating non-consensual porn, impersonating someone, or spreading false information via AI-generated media.
Real example: In 2024, a Texas man sued Macy’s after an AI facial recognition system misidentified him as a robber, leading to his wrongful arrest and an assault in jail. States including California and Texas have criminalized deepfakes used for revenge porn or election interference.
Penalties:
- Up to 10 years in prison for non-consensual deepfake porn (California).
- Civil lawsuits for defamation, with damages up to $150,000.
- Federal charges if used for financial fraud (e.g., AI voice scams).
3. AI-driven cybercrimes & scams
The crime: Using AI to automate phishing, clone voices, or bypass security systems.
Real example: The FCC banned AI-generated voice cloning in robocalls after scams mimicked President Biden’s voice to manipulate voters. In 2025, the DOJ warned it would seek harsher sentences for AI-aided crimes.
Penalties:
- Wire fraud charges: Up to 20 years in prison.
- Fines up to $1 million for companies using AI for deceptive practices.
- Identity theft penalties: 2–15 years per count.
4. Discriminatory AI in hiring, housing, or lending
The crime: Deploying biased algorithms that deny opportunities based on race, gender, or age.
Real example: Lemonade Inc. faced a $4 million lawsuit for using AI chatbots to analyze facial cues in insurance claims, allegedly violating privacy laws. Colorado’s AI Act, taking effect in 2026, mandates bias audits for high-risk systems.
Penalties:
- EEOC fines: Up to $300,000 per violation.
- Class-action lawsuits (e.g., housing discrimination).
- Companies must pay for independent audits and restitution.
5. AI-powered identity theft & trade secret theft
The crime: Using AI to clone identities, steal data, or replicate proprietary systems.
Real example: A former Google engineer was charged in 2024 for stealing AI trade secrets to benefit Chinese companies. The DOJ now treats AI misuse in corporate espionage as a national security threat.
Penalties:
- Economic Espionage Act: Up to 15 years in prison.
- Trade secret theft: Fines up to $5 million for individuals, $10 million for companies.
- Identity theft: 2–5 years plus restitution.
Why should you care?
The U.S. Justice Department is cracking down hard. Deputy AG Lisa Monaco recently announced that AI misuse will trigger stiffer sentences, much as using a gun escalates a robbery charge. Companies are also expected to show they are managing AI risks in their compliance programs.
Protect yourself
- Verify sources: Assume any too-perfect video/voice could be AI-generated.
- Read T&Cs: Apps using facial recognition or voice cloning might sell your data.
- Report misuse: Contact the FTC or local authorities if you suspect AI-driven fraud.
AI is powerful, but with great power comes great legal liability. Stay sharp, stay legal, and don’t let AI turn your life into a true-crime episode.