Acceptable Use Policy

Effective date: 2026-05-11

This Acceptable Use Policy (the “AUP”) applies to every user of Swiftly Studio (the “Service”) and is incorporated into our Terms of Service. By using the Service you agree to comply with this AUP. Violations may result in immediate suspension or termination of your account without refund and may be reported to law enforcement.

1. Prohibited Content

You may not use the Service to generate, upload, store, or distribute any of the following:

1.1 Content involving minors

  • Child sexual abuse material (CSAM) of any kind. We report suspected CSAM to the National Center for Missing & Exploited Children (NCMEC) under 18 U.S.C. § 2258A and to applicable international authorities, and we preserve evidence as required by law.
  • Sexualised or suggestive content involving minors, whether real or AI-generated.
  • Content depicting minors in dangerous, violent, or distressing scenarios.

1.2 Non-consensual intimate content

  • Pornographic content depicting real persons without their explicit, documented consent, including “deepfake” pornography.
  • “Revenge porn”, upskirt imagery, or content created from intimate images shared in confidence.
  • Sexual content involving any identifiable individual without their explicit consent.

1.3 Impersonation and unauthorised likenesses

  • Face-swaps, voice clones, or other synthetic media depicting a real person without their explicit, documented consent. This includes politicians, celebrities, journalists, public figures, and members of your own social circle.
  • Synthetic media intended to imitate a public figure for the purpose of attributing statements or actions to them that they did not say or do.
  • Voice cloning of any real person without that person’s verifiable consent. You may be required to provide proof of consent on request.

1.4 Election integrity

  • Synthetic media depicting candidates, election officials, or election infrastructure intended to deceive voters about a candidate’s statements, position, or conduct.
  • Content designed to suppress voter turnout (e.g. false information about polling locations or eligibility).
  • Additional restrictions may apply during election windows; we may flag outputs depicting public figures during these periods for human review.

1.5 Violence, hatred, and self-harm

  • Content that incites or glorifies terrorism, mass violence, or armed attacks.
  • Hate speech targeting protected groups, including content using protected characteristics (race, ethnicity, religion, gender, sexual orientation, disability, immigration status) to dehumanise.
  • Content that encourages, instructs, or romanticises self-harm, suicide, or eating disorders.
  • Realistic depictions of gore, mutilation, or torture used for shock rather than artistic value.

1.6 Intellectual-property infringement

  • Unlicensed reproductions of copyrighted works, trademarks, or trade dress.
  • Outputs designed to closely mimic the style of a specific named living artist in a way that would mislead viewers about the work’s origin.
  • Content that infringes patents, design rights, or other intellectual property.

1.7 Fraud, deception, and disinformation

  • Fake identification documents, fraudulent product reviews, or forged certifications.
  • Synthetic media intended to deceive viewers about its origin or authenticity in contexts where viewers would reasonably expect authentic media (e.g. news reporting, witness footage).
  • Phishing pages, scam landing pages, romance-scam materials, or credential-harvesting interfaces.

1.8 Regulated content and unsafe advice

  • Medical, legal, financial, or tax advice presented as authoritative guidance from a licensed professional. AI-generated content of this kind must be clearly labelled as not constituting professional advice.
  • Instructions for the creation of weapons (including firearms, bombs, chemical, biological, radiological, or nuclear weapons), illegal drugs, or other prohibited goods.
  • Content that violates the law in your jurisdiction or in the jurisdiction where it is distributed.

2. Prohibited Conduct

You also agree not to:

  • Bypass safety systems — disable, circumvent, or interfere with our content-moderation, rate-limiting, watermarking, or security features.
  • Submit “jailbreak” prompts designed to elicit responses that would otherwise violate this AUP.
  • Extract or replicate models — reverse-engineer, scrape, or otherwise attempt to extract source models, weights, training data, or system prompts from the Service.
  • Train competing models — use Service outputs to train an AI model that competes with the Service or with our AI providers.
  • Resell or share access — share account credentials, resell Service access to third parties, or use a single account on behalf of multiple businesses without our written consent.
  • Abuse infrastructure — flood, launch denial-of-service attacks against, or otherwise abuse our API or the Service in a way that degrades performance for others.
  • Harass others — use Service features (collaboration, sharing, collections) to harass, threaten, or stalk other users.

3. Required Labelling and Transparency

When you publish AI-generated outputs, you must clearly disclose that the content is AI-generated where:

  • The output depicts a real person.
  • The output is used in a political, news, or advocacy context.
  • A reasonable viewer might otherwise be misled about the origin of the content.
  • Required by law (including the EU AI Act).

We may apply visible or invisible watermarks (including C2PA content credentials) to outputs to support provenance verification.

4. Content Moderation

Generation requests may pass through provider-level safety filters offered by our AI partners and through our own prompt screening. We may:

  • Review prompts and outputs (by automated systems and, during investigations, by trained moderators) to detect violations and investigate abuse reports.
  • Block, redact, or watermark outputs that match prohibited categories.
  • Quarantine outputs of public figures during sensitive periods (e.g. elections) for additional review.

Moderation decisions are made automatically in the first instance; human review is available on request (see Section 6).

5. Reporting Violations

To report content or conduct that violates this AUP:

  • Abuse, harassment, or prohibited content: email abuse@swiftlystudio.com. Include the URL or generation ID and a description.
  • Copyright infringement (DMCA): email dmca@swiftlystudio.com. See our Copyright Policy.
  • CSAM: email abuse@swiftlystudio.com, marked URGENT. Include the minimum detail needed; we coordinate with NCMEC.
  • Trademark or impersonation: email abuse@swiftlystudio.com. Include evidence of your rights and the infringing content.
  • Right-of-publicity / unauthorised likeness: email abuse@swiftlystudio.com. Include government ID for verification and the content URL.

6. Enforcement and Appeals

6.1 Enforcement actions

Depending on severity and prior history, we may, without prior notice:

  • Remove or redact violating content.
  • Apply additional safety filters to your account.
  • Suspend or terminate your account.
  • Withhold or revoke credits.
  • Report violations to law enforcement or relevant authorities (CSAM reports are mandatory).
  • Preserve evidence as required by law.

Repeat infringement of this AUP may result in permanent account termination at our sole discretion.

6.2 Appeals

If your account is suspended or content is removed, you may submit an appeal by emailing support@swiftlystudio.com with your account email and the reason you believe the action was incorrect. We will review appeals where we consider it appropriate to do so. Appeal decisions involving content depicting child sexual abuse or imminent harm to others are final.

7. Changes

We may update this AUP from time to time. Material changes will be communicated by email or in-app notice. Continued use of the Service after changes take effect constitutes acceptance.