🛡️ Digital Safety for Parents

Protecting Preteens & Teens in a Converged World

A 12-session course on the intersection of smartphones, social media, and AI—where threats multiply and traditional safety advice fails.

12 Sessions · 25 min per session · 3SC Framework · Practical Action Items

Why This Course Matters Now

$16.6 billion in losses in 2024. 442% surge in voice phishing attacks. 79% of attacks now operate malware-free through AI-powered social engineering. Your teen faces threats you didn't grow up with—and traditional advice doesn't work anymore.

Enroll Your Community

Course Structure

Session 1 The Convergence Crisis

Understand why smartphone + social internet + AI creates multiplicative (not additive) threats. Recognize the current landscape facing teens in 2024-2025.

Key Topics:

$16.6B in 2024 losses · Malware-free attacks · Three-domain framework · Teen vulnerability

Duration: 25 minutes

Session 2 📱 The Invisible Data Collectors

Your teen's phone contains 11+ sensors collecting data you don't see. Accelerometers infer passwords. Motion sensors map daily routines. AI turns "innocuous" sensors into surveillance tools.

What You'll Do:

  • Audit location permissions across iOS/Android
  • Disable cross-app tracking
  • Configure Screen Time and Family Link
  • Review which apps access cameras, microphones, sensors

11+ sensor types · AI inference threats · Platform settings · Privacy audit

Duration: 25 minutes | Action: Complete security checklist

Session 3 🔐 Account Security & Attack Vectors

24% increase in account takeovers. 26 billion credential stuffing attempts monthly. The average person has 146 exposed credentials on dark web marketplaces. AI bots test them all simultaneously.

What You'll Learn:

  • Why SMS-based 2FA fails (SIM swapping, MFA fatigue attacks)
  • Better authentication: authenticator apps, security keys, passkeys
  • Compromise indicators and immediate response protocols
  • Which accounts need MFA immediately (email = master key)

Account takeover crisis · Multi-factor auth · Compromise response · Dark web credentials

Duration: 25 minutes | Action: Set up authenticator app

Session 4 ⚖️ Family Agreements & Sustainable Monitoring

Heavy restrictions correlate with increased risky behavior. The goal: communication over control, negotiation over restriction. Build agreements that preserve trust while providing appropriate oversight.

Framework Components:

  • Device-free zones and times (negotiated, not dictated)
  • App approval processes that explain risks
  • Problem reporting without fear of device confiscation
  • Age-appropriate gradients (10-12 vs. 13-15 vs. 16-17)
  • Monitoring tools: benefits, costs, and trust trade-offs

Trust building · Family agreements · Age-appropriate rules · Monitoring options

Duration: 25 minutes | Action: Draft family agreement

Session 5 📄 Platform Governance & Terms of Service

Your teen clicked "I Agree" to 50+ legal documents they never read. Meta's 2025 update: all private messages are now AI training data. No opt-out. Retroactive application to existing content.

Deceptive Patterns:

  • "Including but not limited to" = unlimited permission grants
  • "We may share with third parties" = your data is sold
  • "You waive the right to" = you can't sue us
  • Platform-specific high-risk features (Snap Map, Discord servers, TikTok algorithm)

ToS deception · Privacy policies · Platform risks · AI training rights

Duration: 25 minutes | Action: ToS scavenger hunt

Session 6 🎮 Current Threat Landscape

Gaming platforms are now a primary attack vector. Roblox receives 10,000+ monthly sextortion reports. Discord serves as a migration platform for predators. Voice chat bypasses safety filters.

Critical Threats:

  • AI-Enhanced Attacks: "Nudify" apps, deepfake harassment (Lancaster County: 20 students targeted)
  • Voice Cloning Scams: Family impersonation with 85% accuracy from 20 seconds of audio
  • Sextortion Patterns: Why teens don't report (37% never tell anyone)
  • Cyberbullying 2.0: AI amplification, synthetic content, platform hopping

Gaming dangers · Deepfakes · Voice cloning · Sextortion

Duration: 25 minutes | Action: Review teen's gaming contacts

Session 7 🧠 Algorithm Awareness & Information Quality

TikTok can create user dependency within 260 videos (35 minutes). Algorithms can push harmful content within 2.6-8 minutes. AI-generated misinformation receives 8% more engagement than human content.

The STOP Method (Teach This to Teens):

  • Source: Who created this? Why? What's their motivation?
  • Timing: When was this created? Current or recycled?
  • Other sources: What do credible sources say?
  • Purpose: What is this trying to make me think or do?

Filter bubbles · Radicalization · AI misinformation · STOP verification

Duration: 25 minutes | Action: Practice STOP method

Session 8 💬 Family Communication & Crisis Response

Why teens don't tell parents: fear of losing their device, shame, and the belief that you won't understand. Build disclosure-friendly communication that encourages help-seeking instead of secrecy.

The Five Cs Framework:

  • Child-Centered: Adapt to individual temperament, acknowledge emotions
  • Content: Discuss quality and appropriateness without immediate prohibition
  • Context: When, where, and how digital interactions occur matters
  • Connection: Emphasize healthy relationships over platform features
  • Calm: Your emotional regulation enables their disclosure

Five Cs · Crisis protocols · Disclosure patterns · Professional help

Duration: 25 minutes | Action: Identify crisis resources

Session 9 🤖 Understanding "Pattern Creatures"

AI systems are pattern-matching creatures, not thinking beings. Trained on billions of examples, they predict likely outputs—but hallucinate false information 16-48% of the time.

Four Types Teens Encounter:

  • Conversational AI: ChatGPT, Snapchat My AI (54% use for homework)
  • Recommendation AI: TikTok, YouTube algorithms (optimize engagement, not wellbeing)
  • Voice AI: Siri, Alexa (struggle with teen speech, lack safety filtering)
  • Generative AI: Image/video creators ("nudify" apps, deepfake tools)

AI limitations · Hallucinations · Pattern matching · Four types

Duration: 25 minutes | Action: Audit AI use

Session 10 🔍 AI Detection & Critical Evaluation

Humans detect high-quality deepfakes only 24.5% of the time—worse than random chance. Detection is hard. Verification is essential.

The Paradigm Shift:

  • OLD: "Can I tell if this is AI?"
  • NEW: "How can I verify this is authentic?"
  • Visual artifacts: anatomical impossibilities, lighting issues, over-perfection
  • Text patterns: overly formal, no personal details, perfect grammar
  • Audio/video: unnatural prosody, sync issues, blurring

Focus: the STOP verification method over artifact detection

Detection limits · Verification focus · Visual artifacts · STOP method

Duration: 25 minutes | Action: Practice detection

Session 11 ✅ Safe & Productive AI Use

54% of children use AI for homework. Teachers use detection tools. Colleges treat AI plagiarism seriously. Establish clear family policies before problems arise.

Family AI Use Agreement:

  • Disclosure: "I used AI to help with [specific task]"
  • Authorship: Final product must reflect student learning
  • Verification: All AI information verified through reliable sources
  • Age-Appropriate: 10-12 requires supervision, 13-15 guided use, 16-17 comprehensive attribution

Legitimate Uses:

Brainstorming, outlining, research starting points, editing assistance, learning support—with verification and disclosure.

Academic integrity · AI policies · Productive use · Emerging platforms

Duration: 25 minutes | Action: Draft AI use policy

Session 12 🎯 Sustainable Practices & Long-Term Strategy

Review the 3SC framework. Create sustainable weekly routines. Develop adaptability for emerging technologies. Establish ongoing education and support networks.

Your Commitments:

  • This Week: Implement one security change per device, have one disclosure-friendly conversation
  • This Month: Weekly family tech check-ins, complete security audit, practice STOP method
  • Long-Term: Communication over control, evolving rules, staying informed, seeking help when needed

The Goal:

The aim is to graduate a capable digital citizen who doesn't need you hovering. You cannot create perfect safety, but you can build a relationship where your teen comes to you with problems.

3SC review · Sustainable routines · Community support · Ongoing learning

Duration: 25 minutes | Outcome: Confident, adaptive parenting

Ready to Protect Your Teen?

This course provides foundations, not comprehensive expertise. Technology evolves constantly—commit to ongoing learning, community support, and adaptive parenting that prioritizes relationship over rules.

Bring This Course to Your Community