
Content Moderation Policy

Effective date: March 7, 2026

1. Our Commitment

Elyvie is committed to maintaining a safe and responsible platform. We take content moderation seriously and employ a combination of automated systems, human review, and user reporting mechanisms to prevent harmful content from appearing on our platform. This policy explains how we prevent child sexual abuse material (CSAM), verify the age of AI characters, and handle user reports.

2. Zero Tolerance for CSAM

Elyvie maintains an absolute zero-tolerance policy toward child sexual abuse material (CSAM) and any content that depicts, promotes, or facilitates the sexual exploitation of minors.

(a) Prohibited Content. It is strictly prohibited to generate, upload, distribute, or request any content depicting minors in sexual or suggestive contexts. This includes AI-generated imagery, text descriptions, and any other form of media.

(b) Technical Safeguards. Our AI image generation systems incorporate safety filters and prompt-level restrictions that block attempts to generate content involving minors. These safeguards operate at the model level and are designed to resist circumvention through prompt engineering.

(c) Monitoring. We actively monitor generation requests and outputs for potential policy violations using automated content classification systems.

(d) Reporting to Authorities. Any suspected CSAM discovered on the platform will be reported immediately to the National Center for Missing & Exploited Children (NCMEC) and relevant law enforcement authorities, in compliance with applicable laws.

(e) Account Termination. Accounts found to be generating, distributing, or requesting CSAM will be permanently terminated without prior notice, and all associated data will be preserved for law enforcement purposes.

3. AI Character Age Verification

All AI characters on the Elyvie platform are fictional and are explicitly designed and verified to represent adults aged 18 or older.

(a) Character Design Standards. Every AI character is created with a documented backstory that includes an explicit age of 18 or older. Character profiles include age, occupation, and life circumstances that are consistent with adult characters.

(b) Review Process. All new characters undergo internal review before being published on the platform. This review verifies that the character's visual appearance, personality description, backstory, and conversational behavior are consistent with an adult persona.

(c) Visual Appearance. Character artwork and AI-generated imagery are designed and reviewed to depict adults. Characters that could be perceived as underage are not permitted on the platform.

(d) Ongoing Monitoring. Character interactions are monitored to ensure characters maintain their adult persona during conversations. Attempts by users to instruct characters to role-play as minors are blocked by the system.

4. Prohibited Content Categories

In addition to CSAM, the following content categories are prohibited on Elyvie:

(a) Non-Consensual Content. Any content depicting or promoting sexual violence, non-consensual acts, or coercion.

(b) Real Person Exploitation. Content that uses the likeness of real, identifiable individuals in sexual or defamatory contexts without their explicit consent.

(c) Illegal Activities. Content that promotes, facilitates, or provides instructions for illegal activities, including but not limited to drug manufacturing, weapons creation, or human trafficking.

(d) Hate Speech. Content that promotes violence, discrimination, or hatred against individuals or groups based on race, ethnicity, religion, gender, sexual orientation, disability, or other protected characteristics.

(e) Harassment and Threats. Content that constitutes harassment, bullying, stalking, or threats of violence against any individual.

5. Content Filtering and Moderation Systems

Elyvie employs multiple layers of content moderation:

(a) Input Filtering. User prompts and messages are screened in real time for prohibited content patterns before being processed by AI models.

(b) Output Filtering. AI-generated text and images are scanned by automated safety classifiers before being delivered to users. Content that fails safety checks is blocked and not displayed.

(c) Model-Level Safety. We use AI models with built-in safety training and alignment that refuse to generate harmful content. These safety measures operate at the model inference level.

(d) Rate Limiting and Anomaly Detection. Automated systems monitor for unusual patterns of content generation requests that may indicate abuse, such as rapid sequential attempts to generate prohibited content.

6. User Reporting

We encourage users to report any content that violates this policy.

(a) How to Report. Users can report violations by emailing jenny@elyvie.ai with a description of the issue. Please include as much detail as possible, including relevant conversation IDs or image URLs.

(b) Response Timeline. All content reports are reviewed within 24 hours of receipt. Reports involving potential CSAM or imminent threats of harm are prioritized and reviewed immediately.

(c) Reporter Protection. We do not disclose the identity of reporters. Users who submit good-faith reports will not face any adverse action on their accounts.

(d) Investigation Process. Upon receiving a report, our team will: review the reported content; determine whether it violates this policy; take appropriate action (content removal, account warning, or account termination); and notify the reporter of the outcome where appropriate.

7. Enforcement Actions

When a policy violation is confirmed, we take the following graduated enforcement actions:

(a) Content Removal. Violating content is immediately removed from the platform.

(b) Warning. For first-time or minor violations, the account holder receives a warning with a clear explanation of the policy violation.

(c) Temporary Suspension. Repeated violations or serious offenses may result in temporary account suspension.

(d) Permanent Termination. Severe violations, including any CSAM-related offenses, result in immediate and permanent account termination without prior warning.

(e) Legal Referral. Violations involving illegal activity are reported to the appropriate law enforcement authorities.

8. Policy Updates and Contact

This Content Moderation Policy may be updated from time to time. Material changes will be communicated through the platform. The most current version is always available on this page.

For questions about this policy or to report a concern, please contact us at:

Run Labs LLC (Elyvie)
30 N Gould St Ste R
Sheridan, WY 82801
Email: jenny@elyvie.ai

© 2026 Elyvie