
AI Training & Your Rights

If a model's output looks like your work, or you have found your work surfaced in a dataset discussion, you deserve orientation without false certainty.

Creator crisis

You are angry because this feels like extraction

A creator discovers or reasonably fears that books, images, songs, code, research, teaching materials, voice, or other creative work may have been copied into AI training pipelines without meaningful consent, attribution, compensation, or usable records. This page treats that fear as real without telling you that you definitely have a claim or definitely do not.

What This Issue Means

  • AI training is not only a visual-art issue. The source base connects it to writers, musicians, photographers, developers, researchers, teachers, journalists, performers, designers, and archive workers.
  • The core rights questions are consent, compensation, transparency, attribution, and records: who used the work, under what permission, for what purpose, and with what payment or disclosure.
  • AI also connects to substitution, licensing, platform dependency, digital likeness, provenance, and income. The rights issue is structural, not only technical.

Source-Safe Current Landscape

  • U.S. law and policy around generative AI training remains contested, fast-moving, and fact-specific.
  • The source matrix treats crawler controls, provenance tools, and opt-out signals as partial tools, not a complete rights or compensation system.
  • The STC evidence base found AI-focused evidence in 35 of 43 discipline sheets, with overlap across payments, portability, discovery, safety, and substitution concerns.
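As one concrete example of the "partial tools" described above, a site owner can publish crawler directives in robots.txt. This is a minimal sketch, assuming currently documented AI-crawler user-agents (GPTBot, Google-Extended, CCBot); the list changes over time, honoring these signals is voluntary, and publishing them is an opt-out request, not a consent, records, or compensation system.

```
# robots.txt — illustrative opt-out signals only; compliance is voluntary.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```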

Source footing: Grounded in the Creator Rights PRD, Source Matrix Row 1, the IA recovery inventory, U.S. Copyright Office AI source categories, crawler-control source categories, guild AI resources, and the 43-discipline niche evidence base.

What STC Advocates

STC advocates for a creator-rights framework where technology serves creators: meaningful consent, fair compensation, usable records, attribution that travels, and updated copyright rules that do not leave creators alone against opaque systems.

Demand 1: Fair Compensation for AI Training

Demand 2: Transparent Usage Tracking

Demand 6: Updated Copyright Frameworks

Demand 7: Explicit Consent for Digital Likeness

What Creators Can Do Right Now

  • Preserve URLs, screenshots, contracts, publication records, original files, dates, correspondence, and any evidence of where the work appeared.
  • Separate what you know from what you suspect. Avoid claiming a specific model used your work unless you have evidence or a public record to support it.
  • Read the copyright and consent pages for adjacent questions about authorship, licensing, likeness, voice, style, and identity.
  • For individual legal claims, speak with qualified counsel or verified legal-aid resources before acting on a dispute.
  • Sign the Declaration to join the public demand for consent, compensation, attribution, transparency, and usable records.
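The record-preservation step above can be sketched in code. This is a minimal Python sketch, assuming you hold local copies of your original files; the function names and manifest format are illustrative assumptions. A timestamped hash manifest records that a specific file existed in a specific form at a specific time, but it does not by itself prove authorship or establish a claim.

```python
# Minimal evidence-manifest sketch (not legal advice).
# Records each file's SHA-256 hash, size, and when the record was made,
# so "what existed when" is written down in a portable form.
import hashlib
import json
import os
from datetime import datetime, timezone


def sha256_of(path: str) -> str:
    """Hash a file in chunks so large originals don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(paths: list[str]) -> dict:
    """Build a timestamped record of each file's hash and size."""
    return {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "files": [
            {"path": p, "sha256": sha256_of(p), "bytes": os.path.getsize(p)}
            for p in paths
        ],
    }
```

A creator might run `json.dumps(build_manifest(["my_original_work.png"]), indent=2)` (illustrative path) and store the output alongside screenshots, contracts, and correspondence.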

Evidence Connection

  • AI-focused evidence appears in 35 of 43 discipline sheets in the STC source inventory.
  • The issue intersects with Payments and Splits, Preservation and Portability, Discovery and Ranking, and Safety and Harassment.
  • Representative evidence lanes include visual artists and illustrators facing style/corpus concerns, writers and journalists facing training-data disputes, audio and voice workers facing synthetic voice clauses, and educators or researchers facing automated derivative work.

Rights education, not legal advice

These pages offer general creator-rights education and advocacy orientation. Individual disputes depend on facts, contracts, jurisdiction, platform rules, and current law. Use this as a starting point, preserve records, and seek qualified legal help for individual claims.

Demand Consent, Compensation, and Records

Add your voice to the call for creators worldwide to claim what's rightfully theirs.