Study: How Monetization Policy Changes Affect Student Creators’ Topic Choices


2026-02-21
9 min read

A student research plan to measure whether YouTube’s 2026 monetization changes shift creators toward sensitive-but-educational topics, with templates.

Hook: Why this study matters to students, teachers, and creators

Students and teachers I coach tell me the same thing: the platform keeps changing, and it’s impossible to know what creators will choose next. If YouTube’s monetization rules shift, do creators start covering more sensitive-but-educational topics like mental health, domestic abuse, or reproductive health — or do they still avoid them? This research-style project gives you a clear, low-risk way to measure the monetization effects on creator behavior and topic selection in 2026.

The evolution of the question in 2026

In January 2026 YouTube revised its monetization policy to allow full monetization of nongraphic videos on sensitive issues such as abortion, self-harm, suicide, and domestic and sexual abuse. This change, widely reported across creator press in early 2026, creates a natural experiment opportunity that didn’t exist in the same way in 2024 or 2025. At the same time, platform trends — expanding shorts monetization, new creator analytics dashboards, and advances in automated content moderation (late 2025–early 2026) — alter incentives for topic choice.

Project aim and research question

Aim: To measure whether YouTube’s 2026 monetization policy changes led creators to publish more videos on sensitive-but-educational topics.

Primary research question: Did creators increase the frequency and visibility of educational videos on sensitive topics after the monetization policy change?

Secondary questions

  • Which creator categories (health educators, news channels, personal vloggers) changed their topic selection most?
  • Did changes differ between long-form videos and short-form content?
  • Did audience engagement metrics (view count, watch time, comments) change for those topics after monetization rules were updated?

Why this is a great student study

This is an ideal capstone or course project because it uses publicly available data, combines qualitative coding with quantitative analysis, teaches reproducible methods, and ties to real-world policy effects. You’ll learn API use, topic classification (manual and automated), time-series analysis, and research ethics — all highly transferable skills for students, teachers, and lifelong learners.

High-level research design

Use a quasi-experimental, mixed-methods approach: combine an interrupted time series (ITS) and difference-in-differences (DiD) framework with a human-coded sample for validation.

Core components

  1. Define treatment and control groups. Treatment = channels that were historically demonetized or flagged for sensitive topics. Control = channels with stable monetization and similar audience size but unrelated topics (e.g., educational tech, math tutorials).
  2. Define the pre and post windows. Example: 6 months pre-policy (July–Dec 2025) and 6 months post-policy (Jan–Jun 2026). Adapt to data availability.
  3. Operationalize outcome metrics. Frequency of uploads on target topics, proportion of channel output that is sensitive educational content, views per video, average watch time, and engagement rates.
  4. Collect both metadata and content. Use YouTube Data API for metadata; sample and download transcripts (where available) for content analysis.
  5. Combine automated topic classification with manual coding. Validate automated labels with a human-coded subset and calculate interrater reliability.

Step-by-step data-collection plan

1. Define sensitive-but-educational topics and build a codebook

Be explicit about what counts as sensitive-but-educational. Example categories:

  • Mental health education (depression, suicide prevention, therapy explainers)
  • Reproductive health (abortion, contraception, pregnancy care)
  • Domestic and sexual abuse education and resources
  • Substance use information (harm reduction vs. glamorization)

For each category include inclusion and exclusion rules. Include examples and edge cases in the codebook. Keep it concise and test it on 30 pilot videos.

2. Select channels and sampling strategy

Choose a stratified sample of channels to reduce selection bias:

  • Health education channels (n = 50)
  • News and commentary channels (n = 50)
  • Personal vloggers and creators known to cover sensitive topics (n = 50)
  • Control educational channels (math, coding, language) (n = 50)

Stratify by subscriber size (micro: <100k, mid: 100k–1M, macro: >1M) so you can analyze whether audience scale mediates the effect.
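The bucketing above is simple enough to express directly. Here is a minimal sketch of the stratification step; the channel IDs and subscriber counts are illustrative placeholders, not real channels:

```python
# Bucket channels into subscriber-size strata (micro/mid/macro) so each
# stratum can be sampled and analyzed separately. Thresholds follow the
# plan above; the channel data here is illustrative.
def size_stratum(subscribers: int) -> str:
    if subscribers < 100_000:
        return "micro"
    if subscribers <= 1_000_000:
        return "mid"
    return "macro"

channels = [
    {"channel_id": "UC_health_demo", "subscribers": 42_000},
    {"channel_id": "UC_news_demo", "subscribers": 350_000},
    {"channel_id": "UC_vlog_demo", "subscribers": 2_400_000},
]

strata: dict[str, list[str]] = {}
for ch in channels:
    strata.setdefault(size_stratum(ch["subscribers"]), []).append(ch["channel_id"])
```

Sampling an equal number of channels from each stratum then gives you the comparison across audience scales.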

3. Use the right tools

Tools and datasets to consider in 2026:

  • YouTube Data API v3 for video metadata and basic stats. Watch for API quota limits and recent API updates in late 2025.
  • Google BigQuery public YouTube datasets when available for large-scale queries (useful for channel-level trends).
  • Transcript extraction via the API or automated speech-to-text (for supported languages). Be mindful of transcription errors on sensitive topics.
  • NLP tools for automated classification: 2026 transformer models (fine-tuned BERT/DistilBERT or lightweight instruction-tuned models) or BERTopic for unsupervised topic clustering.
  • Statistical tools — R or Python for DiD and ITS (packages like statsmodels, causalimpact).

4. Data schema (minimum fields)

  • video_id, channel_id, channel_name
  • published_at (UTC)
  • title, description, tags
  • transcript_text (when available)
  • view_count, like_count, comment_count, watch_time
  • video_duration, format (short vs long)
  • manual_label_topic (from codebook), automated_label_topic, rater_id
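One convenient way to enforce this schema in Python is a dataclass, one instance per video. This is a sketch, not a required structure; the optional fields reflect that transcripts, watch time, and labels are often missing at collection time:

```python
from dataclasses import dataclass, field
from typing import Optional

# One row per video, mirroring the minimum fields listed above.
@dataclass
class VideoRecord:
    video_id: str
    channel_id: str
    channel_name: str
    published_at: str                        # ISO-8601 UTC timestamp
    title: str
    description: str = ""
    tags: list = field(default_factory=list)
    transcript_text: Optional[str] = None    # None when no transcript exists
    view_count: int = 0
    like_count: int = 0
    comment_count: int = 0
    watch_time: Optional[float] = None       # often only in Studio exports
    video_duration: int = 0                  # seconds
    format: str = "long"                     # "short" or "long"
    manual_label_topic: Optional[str] = None # from the codebook
    automated_label_topic: Optional[str] = None
    rater_id: Optional[str] = None
```

A list of `VideoRecord` objects converts cleanly into a pandas DataFrame later in the pipeline.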

Analysis plan

Interrupted time series (ITS)

Test whether the frequency of sensitive-topic uploads changed immediately and over time after the policy change. ITS estimates level and slope changes.
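A minimal version of the ITS is a segmented regression with four terms: intercept, time trend, a post-policy indicator (the level change), and time-since-policy (the slope change). The sketch below fits it with plain least squares on simulated weekly upload counts, so the numbers are illustrative assumptions, not real data:

```python
import numpy as np

# Segmented-regression ITS: y ~ b0 + b1*time + b2*post + b3*time_since_policy.
# b2 is the immediate level change at the policy date, b3 the slope change.
rng = np.random.default_rng(0)
weeks = np.arange(52, dtype=float)   # 26 weeks pre, 26 weeks post
policy_week = 26
post = (weeks >= policy_week).astype(float)
time_since = np.where(post == 1, weeks - policy_week, 0.0)

# Simulate: baseline 2 uploads/week with a +1.0 level jump and +0.05
# slope change after the policy date, plus noise.
y = 2.0 + 0.01 * weeks + 1.0 * post + 0.05 * time_since + rng.normal(0, 0.1, 52)

X = np.column_stack([np.ones_like(weeks), weeks, post, time_since])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = coef[2], coef[3]
```

For real analyses you would use `statsmodels` (as listed in the tools section) to get standard errors and autocorrelation-robust inference, but the design matrix is the same.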

Difference-in-differences (DiD)

Compare treatment channels to controls using DiD. Model example:

Outcome_it = β0 + β1Post_t + β2Treatment_i + β3(Post_t × Treatment_i) + FE_channel + FE_time + ε_it

β3 is the DiD estimator measuring the policy effect on topic selection or engagement.
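In the simplest two-period case, β3 reduces to a difference of group-mean differences, which makes the estimator easy to sanity-check by hand. The numbers below are illustrative, not results:

```python
# Two-period DiD: the interaction term equals
# (treat_post - treat_pre) - (control_post - control_pre).
means = {
    ("treatment", "pre"): 1.0,   # avg sensitive-topic uploads/week
    ("treatment", "post"): 2.2,
    ("control", "pre"): 0.9,
    ("control", "post"): 1.0,
}

did = (means[("treatment", "post")] - means[("treatment", "pre")]) - (
    means[("control", "post")] - means[("control", "pre")]
)
# did corresponds to beta_3 in the regression above: here 1.2 - 0.1 = 1.1
```

The regression form with channel and time fixed effects generalizes this to many periods and controls for stable channel differences and shared shocks.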

Robustness checks

  • Placebo tests using fake policy dates
  • Event-study graphs to visualize dynamic effects
  • Sensitivity to different pre/post windows (3, 6, 12 months)
  • Subgroup analyses by channel size and format (short vs long)
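The placebo test in particular is cheap to automate: re-run the ITS with fake policy dates placed inside the pre-period and check that the estimated "effects" are near zero. This sketch reuses the segmented-regression design on simulated pre-period data with no true break:

```python
import numpy as np

# Placebo check: estimate the ITS level change at fake break weeks inside
# the pre-period. Large "effects" at fake dates suggest pre-existing trends
# rather than a true policy effect. Data here is simulated.
def its_level_change(y, break_week):
    weeks = np.arange(len(y), dtype=float)
    post = (weeks >= break_week).astype(float)
    tsp = np.where(post == 1, weeks - break_week, 0.0)
    X = np.column_stack([np.ones_like(weeks), weeks, post, tsp])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[2]   # the level-change coefficient

rng = np.random.default_rng(1)
pre_only = 2.0 + 0.01 * np.arange(26) + rng.normal(0, 0.1, 26)  # no real break
placebo_effects = [its_level_change(pre_only, w) for w in (8, 13, 18)]
```

If any placebo estimate rivals your real post-policy estimate, treat the main result with suspicion.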

Automated classification + human validation

Use an automated classifier to label the full dataset, then validate on a randomly selected 10–20% human-coded sample. Calculate Cohen’s kappa to measure agreement and report precision/recall for the automated approach.
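Cohen's kappa follows directly from its definition: observed agreement corrected for the agreement expected by chance, computed from each rater's label marginals. A small pure-Python version, with illustrative labels standing in for your codebook categories:

```python
# Cohen's kappa from two raters' labels:
#   kappa = (p_o - p_e) / (1 - p_e)
# where p_o is observed agreement and p_e is chance agreement from the
# marginal label distributions.
def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    cats = set(labels_a) | set(labels_b)
    p_e = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats
    )
    return (p_o - p_e) / (1 - p_e)

human = ["mental_health", "repro_health", "mental_health", "other", "abuse_ed"]
auto = ["mental_health", "repro_health", "other", "other", "abuse_ed"]
kappa = cohens_kappa(human, auto)   # ~0.74 for this toy example
```

In practice `sklearn.metrics.cohen_kappa_score` does the same computation; a kappa above roughly 0.7 is commonly treated as acceptable agreement for this kind of coding task.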

Sample size and power (practical guidance)

For DiD with panel data, simulate expected effect sizes. As a rule of thumb for student work: aim for at least 100 channels with 12 months of weekly observations (≈ 5200 channel-week observations) to detect moderate effects. If you focus on video-level outcomes, a sample of 2000–5000 videos across windows usually yields sufficient power for engagement metrics.
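A back-of-envelope power simulation makes the "simulate expected effect sizes" advice concrete: draw channel means under an assumed effect and noise level, estimate the DiD each run, and count how often the estimate clears roughly two standard errors. Every parameter here is an assumption you should replace with pilot estimates:

```python
import numpy as np

# Crude power simulation for a two-period DiD with 50 channels per group.
rng = np.random.default_rng(2)

def simulate_power(n_per_group=50, effect=0.3, noise=0.5, runs=500):
    hits = 0
    for _ in range(runs):
        t_pre = rng.normal(1.0, noise, n_per_group)
        t_post = rng.normal(1.0 + effect, noise, n_per_group)
        c_pre = rng.normal(1.0, noise, n_per_group)
        c_post = rng.normal(1.0, noise, n_per_group)
        did = (t_post.mean() - t_pre.mean()) - (c_post.mean() - c_pre.mean())
        se = 2 * noise / n_per_group ** 0.5   # SE of the four-mean contrast
        hits += abs(did) > 2 * se             # crude significance criterion
    return hits / runs

power = simulate_power()
```

If the simulated power is low (below ~0.8), increase the sample, lengthen the windows, or target a larger minimum effect before committing to data collection.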

Ethics, privacy, and IRB

Public YouTube data is generally allowed for research, but respect creators’ rights and privacy. For student projects planning to publish results beyond classroom use, consult your institution’s IRB. Important steps:

  • Do not collect or distribute identifiable personal data beyond public channel names and public video IDs.
  • Mask or aggregate sensitive content when presenting examples.
  • If you contact creators for interviews or surveys, obtain informed consent and document your process.

Limitations and confounders to look for

No observational study is perfect. Key threats to validity:

  • Algorithmic recommendation changes happening simultaneously (e.g., late-2025 algorithm updates favoring shorts).
  • Creator behavior reacting to news cycles instead of policy (spikes around world events).
  • Changes in creator revenue diversification (Patreon, merch, paid courses) that also influence topic choice.
  • Measurement error in automated topic classification and transcription inaccuracies on sensitive topics.

Data reporting and reproducibility

Document your data pipeline with reproducible notebooks. Share code and aggregated data (not raw transcripts if privacy concerns exist) on a repository with a clear README. Include your codebook, sample selection criteria, and pre-registered analysis plan if possible. Pre-registration strengthens credibility and is excellent practice for student researchers entering 2026 academic environments.

Practical, actionable templates for students

Here are ready-to-use templates you can copy and adapt.

1. Search query template (YouTube Data API)

  1. Use channel IDs list as input.
  2. Fetch uploads playlist for each channel and iterate to pull video metadata.
  3. Filter by published_at within your pre/post windows.
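The steps above can be sketched as follows. The actual API calls (shown only as comments) require `google-api-python-client` and an API key, so this block keeps the runnable part to the pure date-window filter; field names like `publishedAt` follow the YouTube Data API, while the window dates are the example ones from this plan:

```python
from datetime import datetime, timezone

# Window filter applied while iterating a channel's uploads playlist.
def in_window(published_at: str, start: str, end: str) -> bool:
    """published_at is the ISO-8601 UTC string returned by the Data API."""
    ts = datetime.fromisoformat(published_at.replace("Z", "+00:00"))
    lo = datetime.fromisoformat(start).replace(tzinfo=timezone.utc)
    hi = datetime.fromisoformat(end).replace(tzinfo=timezone.utc)
    return lo <= ts <= hi

# Sketch of the fetch loop (needs google-api-python-client and an API key):
#   youtube = build("youtube", "v3", developerKey=API_KEY)
#   ch = youtube.channels().list(part="contentDetails", id=channel_id).execute()
#   ...page through playlistItems().list(...) on the uploads playlist and
#   keep items where in_window(item["snippet"]["publishedAt"],
#                              "2025-07-01", "2026-06-30")

in_pre = in_window("2025-09-15T12:00:00Z", "2025-07-01", "2025-12-31")
in_post = in_window("2026-03-02T08:30:00Z", "2025-07-01", "2025-12-31")
```

Cache raw API responses to disk as you go; quota limits make re-fetching expensive.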

2. Manual coding checklist (for each sampled video)

  • Does the video contain educational material about a sensitive topic? (Yes/No)
  • Primary category (choose from codebook)
  • Is the presentation non-graphic? (Yes/No)
  • Tone: informational / advocacy / personal narrative / promotional
  • Rater confidence (1–5)

3. Minimal reproducible analysis pipeline

  1. Data collection script (Python): fetch metadata and transcripts
  2. Preprocessing: normalize timestamps, clean text
  3. Topic labeling: automated classifier -> produce label column
  4. Validation: randomly sample and human-code 15% of videos
  5. Statistics: run ITS and DiD models, generate event-study plots
  6. Report: create a concise results dashboard (RMarkdown or Jupyter) with key tables and visuals

Expected findings and 2026 predictions

Based on platform incentives and the January 2026 policy change, plausible outcomes include:

  • A measurable increase in the frequency of sensitive educational videos from channels that previously self-censored.
  • Stronger effects in long-form educational channels than in shorts, since YouTube explicitly updated ad rules for long-form content first.
  • Smaller reach gains if recommendation systems are conservative about surfacing sensitive content despite monetization allowance.
  • Creators using AI tools (assistant scripts, auto-edits) to scale sensitive-topic content while minimizing risk — a 2026 trend emerging as generative AI tools improved content drafting and compliance checks in late 2025.

Design the study so that the results teach you about causality, not just correlation. Use multiple methods (ITS, DiD, and qualitative validation) to build a consistent story.

How teachers can use this project in class

  • Split students into teams: API/data collection, NLP/classification, statistical analysis, and ethics/communication.
  • Turn the codebook and validation exercise into a lab session on interrater reliability.
  • Require a short preregistration and public repo for reproducibility practice.

Next steps and practical checklist

  1. Finalize codebook and pilot 30 videos
  2. Assemble channel list and get API access
  3. Collect metadata and transcripts for your windows
  4. Run automated classification and validate with human coding
  5. Perform ITS/DiD analyses and robustness checks
  6. Prepare a reproducible report and share aggregated results

Closing: why this matters beyond the classroom

Measuring the relationship between monetization policy and creator topic choice helps answer a larger question: how platform incentives shape public education. In a media environment where policy shifts (like YouTube’s January 2026 update) can change what audiences see and what creators feel safe to cover, empirical student research provides evidence for educators, platforms, and policymakers.

Call to action

Ready to run this study? Start with the pilot: pick 30 videos, write a two-page codebook, and post your preregistration. Share your repo or questions with our community of student experimenters to get feedback. If you want a starter template (codebook, API scripts, and analysis notebooks), request the template and I’ll share an editable starter kit for classrooms and independent learners.
