Acceptable Use Policy

1. Introduction and Purpose

1.1 Policy Intent

Welcome to Alludium's Acceptable Use Policy (AUP). This policy defines what you can and cannot do when using our AI platform services. Our goal is to maintain a safe, secure, and productive environment for all users while ensuring compliance with applicable laws and regulations.

By using Alludium's services, you agree to comply with this AUP in addition to our Terms of Service and Privacy Policy. This policy helps protect our community, our platform infrastructure, and the broader internet ecosystem.

1.2 Our Commitment

We are committed to:

  • Safety First: Protecting users from harmful content and activities

  • Innovation Support: Enabling legitimate AI development and use cases

  • Legal Compliance: Meeting all applicable regulatory requirements

  • Fair Enforcement: Applying policies consistently and transparently

  • User Rights: Respecting privacy and providing due process

1.3 Your Responsibility

As an Alludium user, you are responsible for:

  • Understanding and following this policy

  • Using our services only for lawful purposes

  • Respecting the rights and safety of others

  • Reporting violations when you encounter them

  • Keeping your account secure and monitoring its use

2. Scope and Applicability

2.1 Covered Services

This AUP applies to all Alludium services, including:

  • The Alludium AI platform and website

  • AI agent creation and deployment tools

  • APIs and third-party integrations

  • Community forums and support channels

  • Any other services provided by Alludium Ltd

2.2 Covered Users

This policy applies to:

  • Individual users and account holders

  • Enterprise customers and their authorized users

  • API developers and integration partners

  • Community members and forum participants

  • Anyone accessing our services in any capacity

2.3 Geographic Scope

This policy applies globally to all users regardless of location, while also incorporating specific requirements for users in:

  • European Union and European Economic Area (EU AI Act, GDPR)

  • United Kingdom (UK GDPR, Online Safety Act)

  • United States (various federal and state regulations)

  • Other jurisdictions with applicable AI and data protection laws

Jurisdictional Conflicts: Where local laws in your jurisdiction provide greater user protections or impose different requirements than this policy, those local laws shall prevail to the extent of the conflict. We will make reasonable efforts to accommodate local legal requirements while maintaining platform safety and security.

3. Prohibited Activities

3.1 Absolutely Prohibited (Zero Tolerance)

The following activities result in immediate account termination without warning:

3.1.1 Child Safety Violations

  • Creating, sharing, or requesting content that sexualizes minors

  • Generating child sexual abuse material (CSAM) in any form

  • Grooming, exploiting, or endangering children

  • Creating AI agents designed to interact inappropriately with minors

3.1.2 Illegal Content Generation

  • Producing terrorist content or violent extremist materials

  • Creating detailed instructions for dangerous illegal activities

  • Generating content that facilitates human trafficking

  • Producing guides for manufacturing illegal drugs or weapons

  • Creating content that violates export control laws

3.1.3 Critical Security Threats

  • Developing malware, ransomware, viruses, or other malicious code

  • Creating tools for unauthorized system access or data theft

  • Coordinating cyberattacks or distributed denial of service (DDoS) attacks

  • Attempting to compromise Alludium's infrastructure or security

  • Generating content designed to harm critical infrastructure

3.2 Serious Violations (Account Suspension)

The following activities result in immediate account suspension and manual review:

3.2.1 Harmful Content Creation

  • Generating hate speech targeting individuals or groups based on protected characteristics

  • Creating content that promotes self-harm, suicide, or eating disorders

  • Producing content that harasses, bullies, or threatens specific individuals

  • Generating non-consensual intimate content or "revenge porn"

  • Creating extremely graphic violent content

3.2.2 Platform Abuse

  • Attempting to reverse engineer or extract our AI models

  • Using prompt injection or other techniques to circumvent safety measures

  • Creating automated systems that abuse our platform or APIs

  • Deliberately overloading our systems or consuming excessive resources

  • Attempting to access other users' accounts or data

3.2.3 Deception and Manipulation

  • Creating large-scale misinformation or disinformation campaigns

  • Generating deepfakes or synthetic media without clear disclosure

  • Impersonating real people, organizations, or brands for deceptive purposes

  • Creating AI agents designed to manipulate elections or democratic processes

  • Coordinating inauthentic behavior across multiple accounts

3.3 Policy Violations (Warning and Monitoring)

The following activities result in warnings and increased monitoring:

3.3.1 Professional Service Misuse

  • Providing medical, legal, or financial advice without appropriate disclaimers

  • Impersonating licensed professionals without proper credentials

  • Offering regulated services without required authorizations

  • Creating AI agents that make professional recommendations without human oversight

3.3.2 Privacy and Data Violations

  • Processing personal data without proper consent or legal basis

  • Creating surveillance systems that violate privacy rights

  • Using facial recognition or biometric identification for prohibited purposes

  • Collecting or using personal information in violation of applicable laws

3.3.3 Commercial Abuse

  • Generating spam or unsolicited commercial communications

  • Creating content that infringes intellectual property rights

  • Engaging in deceptive advertising or marketing practices

  • Using our platform for price manipulation or unfair competition

4. AI-Specific Usage Restrictions

4.1 Model Security and Integrity

You must not:

  • Attempt to reverse engineer, extract, or copy our AI models

  • Try to discover training data or proprietary information about our models

  • Use techniques designed to manipulate AI responses beyond intended functionality

  • Share or distribute any proprietary information about our AI systems

  • Attempt to create competing AI services using our technology

4.2 AI Agent Development Standards

When creating AI agents, you must:

  • Clearly disclose when users are interacting with AI systems

  • Implement appropriate safeguards to prevent harmful outputs

  • Maintain human oversight for high-stakes decisions

  • Respect usage limits and not attempt to circumvent restrictions

  • Monitor agent behavior and correct issues promptly

4.3 Prohibited AI Applications

You may not create AI agents for:

  • Autonomous weapons systems or military applications without authorization

  • Real-time biometric identification in public spaces (except where legally permitted)

  • Social scoring systems based on personal characteristics or behavior

  • Emotion recognition in workplace or educational settings (except for safety)

  • Subliminal manipulation or techniques designed to exploit psychological vulnerabilities

4.4 High-Risk AI Systems

For AI systems that may significantly impact individuals, you must:

  • Implement meaningful human oversight and intervention capabilities

  • Provide clear information about automated decision-making

  • Maintain appropriate accuracy and performance standards

  • Document risk assessment and mitigation measures

  • Enable human review of automated decisions when requested

5. Content and Safety Standards

5.1 Prohibited Content Types

You may not use our services to create, share, or distribute:

5.1.1 Harmful to Minors

  • Any content that sexualizes, exploits, or endangers children

  • Content that could be used to groom or harm minors

  • Age-inappropriate content shared in spaces accessible to children

5.1.2 Violent and Graphic Content

  • Detailed depictions of extreme violence or torture

  • Content glorifying or promoting violence against specific individuals

  • Instructional content for causing physical harm

5.1.3 Hate Speech and Discrimination

  • Content attacking individuals based on race, religion, gender, sexual orientation, disability, or other protected characteristics

  • Content promoting discrimination or exclusion of protected groups

  • Content that dehumanizes or promotes hostility toward specific communities

5.1.4 Harassment and Abuse

  • Content designed to harass, bully, or intimidate specific individuals

  • Coordinated harassment campaigns or brigading

  • Doxxing or sharing private information to facilitate harassment

5.2 Adult Content Restrictions

While we generally permit adult content for appropriate use cases:

  • No non-consensual content: All depicted individuals must have consented

  • No exploitation: Content must not exploit or objectify individuals

  • Age verification required: Robust age verification must be in place for adult content generation

  • Clear labeling: Adult content must be clearly identified and appropriately restricted

5.3 Misinformation and Deception

You may not use our platform to:

  • Create false information designed to deceive or mislead

  • Generate fake news or propaganda campaigns

  • Create synthetic media (deepfakes) without clear disclosure

  • Impersonate real individuals, organizations, or brands deceptively

  • Spread conspiracy theories or dangerous health misinformation

6. Security and Technical Requirements

6.1 Platform Security

You must not:

  • Attempt to gain unauthorized access to our systems or other users' accounts

  • Probe, scan, or test the vulnerability of our systems

  • Breach or circumvent our security or authentication measures

  • Access data or content that you are not authorized to access

  • Interfere with the proper operation of our services

6.2 Resource Usage

You must:

  • Respect rate limits and usage quotas for your account type

  • Use resources efficiently and not engage in wasteful consumption

  • Avoid overloading our systems with excessive requests

  • Report technical issues rather than attempting to exploit them

  • Follow API guidelines and integration best practices

6.3 Data Protection

You are responsible for:

  • Securing your account with strong passwords and appropriate access controls

  • Protecting API keys and authentication credentials

  • Monitoring account activity and reporting suspicious behavior

  • Implementing appropriate data protection measures for sensitive information

  • Complying with data protection laws applicable to your use case

7. Legal and Regulatory Compliance

7.1 Applicable Laws

You must comply with all applicable laws and regulations, including:

  • Local, national, and international laws in your jurisdiction

  • AI-specific regulations (EU AI Act, emerging national AI laws)

  • Data protection laws (GDPR, CCPA, etc.)

  • Export control and sanctions regulations

  • Industry-specific regulations relevant to your use case

7.2 Export Controls

You must not:

  • Use our services in violation of export control laws

  • Provide access to individuals or entities on prohibited party lists

  • Use our services for dual-use applications without proper authorization

  • Export AI technology to restricted countries or end-users

  • Engage in activities contrary to national security interests

7.3 Sanctions Compliance

Our services are not available to:

  • Individuals or entities on sanctions lists (OFAC, EU, UN, etc.)

  • Users in countries subject to comprehensive sanctions

  • Organizations controlled by sanctioned parties

  • Anyone whose use would violate applicable sanctions regimes

7.4 Professional Regulations

If you use our services to provide professional services, you must:

  • Maintain appropriate licenses and certifications

  • Include required disclaimers and disclosures

  • Comply with professional standards and ethics

  • Obtain necessary approvals for regulated activities

  • Maintain appropriate professional liability coverage

8. User Responsibilities

8.1 Account Management

You are responsible for:

  • Maintaining account security and protecting your credentials

  • Monitoring all activity under your account

  • Ensuring authorized users comply with this policy

  • Promptly reporting any unauthorized access or suspicious activity

  • Keeping contact information current for important notifications

8.2 Content Oversight

You must:

  • Review and monitor content generated by your AI agents

  • Implement appropriate controls to prevent policy violations

  • Respond promptly to reports of problematic content

  • Maintain logs of AI agent activities as may be required

  • Take corrective action when violations are identified

8.3 Reporting Obligations

You should report:

  • Policy violations by other users or content

  • Technical issues that could affect platform security

  • Bugs or vulnerabilities you discover

  • Suspicious activities that may indicate abuse

  • Legal process or government requests related to your use

8.4 Cooperation with Investigations

You agree to:

  • Cooperate fully with Alludium's investigation of policy violations

  • Provide requested information relevant to investigations

  • Preserve relevant data when requested for legal or safety reasons

  • Respond promptly to communications about policy matters

  • Participate in good faith in the resolution of disputes

9. Violation Detection and Reporting

9.1 Detection Methods

We use various methods to detect policy violations:

  • Human review of reported content and accounts (primary method)

  • User reports from our community

  • Basic monitoring tools for technical violations (e.g., rate limiting, resource abuse)

  • Manual analysis of usage patterns when investigating reports

No Automated Decision-Making: At this time, we do not use automated systems to make enforcement decisions about policy violations. All enforcement actions are reviewed and decided by human moderators. If we implement automated enforcement systems in the future, we will update this policy and provide appropriate notices regarding your rights under applicable data protection laws.

9.2 Reporting Violations

To report a policy violation:

Email: abuse@alludium.ai
Subject: Policy Violation Report
Include:

  • Description of the violation

  • Relevant URLs, usernames, or content identifiers

  • Screenshots or evidence (if appropriate)

  • Your contact information for follow-up

Response Time: We aim to acknowledge reports within 24 hours and complete initial review within 72 hours.

9.3 Anonymous Reporting

We accept anonymous reports, but investigation may be limited without contact information for follow-up questions.

9.4 False Reports

Deliberately false or malicious reports may result in:

  • Warning or account restrictions for the reporting user

  • Limitation of reporting privileges

  • Account suspension for repeat false reporting

  • Legal action in cases of malicious false reporting

10. Enforcement Procedures

10.1 Investigation Process

When we receive a violation report or detect suspicious activity:

  1. Initial Assessment

    • We aim to acknowledge reports within 24-48 hours during business days

    • Review the reported content or behavior for immediate safety concerns

    • Determine if urgent action is needed to prevent harm

    • Begin formal investigation if warranted

  2. Investigation

    • We aim to complete investigations within 5-7 business days

    • Gather relevant evidence and documentation

    • Review user history and context

    • Consult with legal and safety teams as needed for complex cases

  3. Decision

    • We aim to make enforcement decisions within 2-3 business days of completing investigation

    • Determine if a violation occurred based on available evidence

    • Decide on appropriate enforcement action considering all circumstances

    • Document decision and rationale for our records

  4. Notification

    • We aim to notify affected users within 24-48 hours of making a decision

    • Provide specific details about findings and actions

    • Include information about appeals process where applicable

    • Implement enforcement measures as determined

Timeline Flexibility: These timeframes are targets based on typical cases. Complex investigations, high report volumes, holidays, or other exceptional circumstances may require additional time. We will inform you if significant delays are expected in your specific case.

10.2 User Notification

We will notify you of enforcement actions as follows:

Advance Notice: For non-urgent violations, we aim to provide 24-48 hours' advance notice before taking enforcement action, allowing you to:

  • Respond with additional context or explanation

  • Voluntarily correct the violation where possible

  • Prepare for any service interruption

Immediate Action Notice: For urgent safety, security, or legal compliance issues, we may take immediate action and notify you simultaneously, including:

  • Content that poses immediate harm to individuals

  • Security threats to our platform or other users

  • Legal compliance requirements with short deadlines

  • Situations where advance notice could worsen the harm

Notification Content: All enforcement notifications will include:

  • Specific description of the violation found

  • Reference to the relevant policy section

  • Explanation of the enforcement action being taken

  • Information about your appeal rights (where applicable)

  • Timeline for any required corrective actions

  • Contact information for questions

Right to Respond: Except in urgent safety situations, you have the right to provide additional context or explanation before final enforcement decisions are made.

10.3 Evidence Preservation

We maintain:

  • Detailed logs of violation investigations

  • Evidence supporting enforcement decisions

  • Documentation of appeals and their outcomes

  • Anonymized data for policy improvement

  • Legal compliance records as required by law

11. Consequences and Penalties

11.1 Enforcement Tiers

Our enforcement follows a graduated approach based on violation severity:

Tier 1: Zero Tolerance Violations

  • Immediate account termination

  • Permanent ban from all services

  • Content removal and data deletion

  • Possible law enforcement referral

  • Limited appeals (safety review only - see Section 12.6)

Examples: Child exploitation, terrorism, critical security threats

Tier 2: Serious Violations

  • Immediate account suspension

  • Manual review within 5-7 business days

  • Possible account restoration with restrictions

  • Required corrective measures

  • Full appeal process available

Examples: Hate speech, platform abuse, large-scale misinformation

Tier 3: Policy Violations

  • Warning and explanation

  • Content removal or restrictions

  • Account monitoring and restrictions

  • Escalation for repeat violations

  • Simple appeal and correction process available

Examples: Professional service misuse, minor privacy violations, commercial abuse

11.2 Progressive Enforcement

For Tier 3 violations, we typically follow:

  1. First violation: Warning and education

  2. Second violation: Temporary restrictions

  3. Third violation: Account suspension

  4. Fourth violation: Account termination

11.3 Factors Considered

When determining consequences, we consider:

  • Severity and scale of the violation

  • Intent and knowledge of the user

  • User's violation history and cooperation

  • Harm caused to individuals or the platform

  • Legal and safety requirements

11.4 Account Restoration Requirements

For suspended accounts to be restored:

  • Acknowledgment of the violation

  • Corrective measures implemented

  • Agreement to additional monitoring

  • Compliance with any specific requirements

  • Completion of any waiting period we determine appropriate

12. Appeals and Reinstatement

12.1 Appeal Rights by Violation Tier

Your appeal rights depend on the severity of the violation:

Tier 3 Violations (Simple Appeals)

  • Full right to appeal all enforcement actions

  • Informal review process

  • Direct communication with enforcement team

  • Resolution targeted within 3-5 business days

Tier 2 Violations (Standard Appeals)

  • Full right to appeal all enforcement actions

  • Formal review process with documentation

  • Senior moderator review

  • Resolution targeted within 7-10 business days

Tier 1 Violations (Limited Safety Appeals)

  • Right to appeal on limited grounds only:

    • Identity error (wrong user/account)

    • Technical error in detection

    • Content did not actually violate the policy (misidentification)

    • Evidence of account compromise

  • Focus on factual accuracy, not proportionality

  • Independent safety review when possible

  • Resolution targeted within 10-15 business days

12.2 Appeal Process

To file an appeal:

Email: appeals@alludium.ai
Subject: [Tier Level] Appeal - [Account/Content ID]
Include:

  • Your account information

  • Description of the enforcement action

  • Specific grounds for appeal

  • Any relevant evidence or context

  • Requested resolution

12.3 Appeal Review Process

We will:

  • Acknowledge your appeal within 48 hours during business days

  • Assign appropriate reviewer based on violation tier

  • Review all relevant evidence and your appeal submission

  • Consult with original decision makers and additional reviewers as needed

  • Respond with our decision within the timeframe specified for your tier

12.4 Appeal Review Standards

For Tier 3 & 2 Appeals: We will consider:

  • Whether a violation actually occurred

  • Whether our response was proportionate

  • Any mitigating circumstances or context

  • Your cooperation and corrective actions

  • Your overall account history

For Tier 1 Limited Appeals: We will only consider:

  • Whether you were correctly identified as the violator

  • Whether the content actually violated our zero-tolerance policies

  • Whether technical errors affected the decision

  • Whether your account was compromised

12.5 Appeal Outcomes

Possible outcomes include:

  • Appeal Upheld: Enforcement action reversed, account/content restored

  • Appeal Partially Upheld: Reduced enforcement action

  • Appeal Denied: Original enforcement action stands

  • Additional Review Required: Escalation to senior team or external review

12.6 Limited Appeals for Zero Tolerance Violations

Rationale: Some violations are so serious that they pose immediate danger to individuals or society. For these violations, our review focuses on accuracy rather than proportionality.

Available Grounds:

  • Identity Error: "This wasn't my account/content"

  • Content Misidentification: "The content didn't actually violate the policy"

  • Technical Error: "A system error caused incorrect action"

  • Account Compromise: "My account was hacked/compromised"

Not Available Grounds:

  • Proportionality arguments ("The punishment was too severe")

  • Intent arguments ("I didn't mean to violate the policy")

  • Context arguments ("There were special circumstances")

Review Process: Appeals are reviewed by an independent safety reviewer where available, the review focuses on factual accuracy, and the decision rationale is documented.

12.7 Appeal Limitations

  • Frivolous appeals may result in loss of appeal privileges

  • Repeated appeals for the same issue will not be reviewed after a final decision has been issued

  • Abuse of appeals process may result in additional restrictions

  • Legal remedies remain available as provided by applicable law

12.8 Reinstatement Process

If an appeal is successful:

  • Account access will be restored within 24-48 hours

  • Content will be restored where appropriate and still available

  • Restrictions will be removed or modified as determined

  • Records will be updated to reflect the correction

  • Notification will be provided confirming the restoration

Post-Reinstatement: Some cases may include additional monitoring or restrictions as part of reinstatement terms.

13. Updates and Changes

13.1 Policy Updates

We may update this policy to:

  • Address new risks or threats to our platform

  • Comply with legal requirements or regulations

  • Improve clarity or user understanding

  • Incorporate lessons learned from enforcement experience

  • Align with industry best practices and standards

13.2 Notification of Changes

We will notify you of material changes:

  • 30 days in advance for significant policy changes

  • Via email to your registered account address

  • Through platform notifications when you next access our services

  • On our website with highlighted changes

  • With explanation of the reasons for changes

13.3 Continued Use

Your continued use of our services after policy changes constitutes acceptance of the updated policy. If you disagree with changes, you may terminate your account before they take effect.

14. Contact Information

14.1 Policy Questions

For questions about this policy:

Email: policy@alludium.ai
Subject: AUP Policy Question

14.2 Violation Reporting

To report policy violations:

Email: abuse@alludium.ai
Subject: Policy Violation Report

14.3 Appeals

To appeal enforcement actions:

Email: appeals@alludium.ai
Subject: Enforcement Appeal

14.4 General Contact

Alludium Ltd
Company Number: 15062888
Email: legal@alludium.ai
Website: https://www.alludium.ai/
Registered Office: International House, 36-38 Cornhill, London, United Kingdom, EC3V 3NG

14.5 Emergency Contact

For urgent safety or security issues:

Email: security@alludium.ai
Subject: URGENT - Security Issue

We monitor security reports 24/7 and will respond immediately to genuine emergencies.

Legal Integration Notice

This Acceptable Use Policy is incorporated by reference into our Terms of Service and forms part of your agreement with Alludium Ltd. Violations of this policy constitute breaches of your Terms of Service and may result in termination of your account and access to our services.

This policy is designed to comply with applicable laws including the EU AI Act, GDPR, UK Data Protection Act 2018, California Consumer Privacy Act, and other relevant regulations. We reserve the right to take additional actions as required by applicable law.

Effective Date: August 1, 2025
Last Updated: August 12, 2025

This policy is designed to create a safe, productive environment for all Alludium users while enabling innovative AI development and deployment. We appreciate your cooperation in maintaining our community standards.