Protecting Preteens & Teens in a Converged World
A 12-session course on the intersection of smartphones, social media, and AI—where threats multiply and traditional safety advice fails.
$16.6 billion in losses in 2024. 442% surge in voice phishing attacks. 79% of attacks now operate malware-free through AI-powered social engineering. Your teen faces threats you didn't grow up with—and traditional advice doesn't work anymore.
Understand why smartphone + social internet + AI creates multiplicative (not additive) threats. Recognize the current landscape facing teens in 2024-2025.
Duration: 25 minutes
Your teen's phone contains 11+ sensors collecting data you don't see. Accelerometers infer passwords. Motion sensors map daily routines. AI turns "innocuous" sensors into surveillance tools.
Duration: 25 minutes | Action: Complete security checklist
24% increase in account takeovers. 26 billion credential stuffing attempts monthly. The average person has 146 exposed credentials on dark web marketplaces. AI bots test them all simultaneously.
Duration: 25 minutes | Action: Set up authenticator app
Heavy restrictions correlate with increased risky behavior. The goal: communication over control, negotiation over restriction. Build agreements that preserve trust while providing appropriate oversight.
Duration: 25 minutes | Action: Draft family agreement
Your teen clicked "I Agree" to 50+ legal documents they never read. Meta's 2025 update: all private messages are now AI training data. No opt-out. Retroactive application to existing content.
Duration: 25 minutes | Action: ToS scavenger hunt
Gaming platforms are now the primary attack vector. Roblox receives 10,000+ monthly sextortion reports. Discord serves as a migration platform for predators. Voice chat bypasses safety filters.
Duration: 25 minutes | Action: Review teen's gaming contacts
TikTok can create user dependency within 260 videos (35 minutes). Algorithms can push harmful content within 2.6-8 minutes. AI-generated misinformation receives 8% more engagement than human content.
Duration: 25 minutes | Action: Practice STOP method
Why teens don't tell parents: fear of losing device, shame, belief you won't understand. Build disclosure-friendly communication that encourages help-seeking instead of secrecy.
Duration: 25 minutes | Action: Identify crisis resources
AI systems are pattern-matching machines, not thinking beings. Trained on billions of examples, they predict likely outputs, and they hallucinate false information 16-48% of the time.
Duration: 25 minutes | Action: Audit AI use
Humans detect high-quality deepfakes only 24.5% of the time—worse than random chance. Detection is hard. Verification is essential.
Focus: the STOP verification method over artifact detection
Duration: 25 minutes | Action: Practice detection
54% of children use AI for homework. Teachers use detection tools. Colleges treat AI plagiarism seriously. Establish clear family policies before problems arise.
Acceptable uses: brainstorming, outlining, research starting points, editing assistance, and learning support, always with verification and disclosure.
Duration: 25 minutes | Action: Draft AI use policy
Review the 3SC framework. Create sustainable weekly routines. Develop adaptability for emerging technologies. Establish ongoing education and support networks.
The goal: graduating a capable digital citizen who no longer needs you hovering. You cannot create perfect safety, but you can build a relationship where they come to you with problems.
Duration: 25 minutes | Outcome: Confident, adaptive parenting
This course provides foundations, not comprehensive expertise. Technology evolves constantly—commit to ongoing learning, community support, and adaptive parenting that prioritizes relationship over rules.
Bring This Course to Your Community