
Content Moderation Policy

Pichr’s content moderation approach combines automated AI detection with human review to maintain a safe, UK-compliant platform.

Last Updated: 13 November 2025


Our Moderation Approach

Multi-Layered System

  1. Automated Detection (AI)

    • Every upload is scanned by Cloudflare Workers AI
    • NSFW scoring and categorization
    • Instant flagging of high-risk content
  2. Human Review (Moderators)

    • Trained moderation team reviews flagged content
    • Context-aware decision making
    • Appeals and edge case handling
  3. Community Reports (Users)

    • User-generated reports for policy violations
    • Community-driven safety
    • Priority routing for severe violations

Automated Moderation (AI)

How It Works

Every image uploaded to Pichr is automatically scanned:

  • Technology: Cloudflare Workers AI
  • Speed: Real-time (during upload finalization)
  • Accuracy: ~95% for common NSFW categories
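For illustration, a scan like this can be expressed as a Workers AI call during upload finalization. The sketch below is a minimal Worker; the model id, binding signature, result shape, and category names are placeholders, not Pichr's actual configuration.

```ts
// Minimal sketch of an upload-finalization scan on Cloudflare Workers AI.
// Model id, result shape, and category names are placeholders.
interface Env {
  // Workers AI binding (simplified signature for this sketch).
  AI: { run(model: string, inputs: Record<string, unknown>): Promise<unknown> };
}

// Hypothetical classifier output: one confidence score (0-1) per category.
interface NsfwScores {
  adult: number;
  violence: number;
  offensive: number;
  medical: number;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Read the uploaded image bytes from the finalization request.
    const imageBytes = new Uint8Array(await request.arrayBuffer());

    // Run an image-classification model; "@cf/example/nsfw-classifier" is a placeholder id.
    const raw = await env.AI.run("@cf/example/nsfw-classifier", {
      image: Array.from(imageBytes),
    });
    const scores = raw as NsfwScores;

    // The highest category score drives the automatic action (see below).
    const maxScore = Math.max(scores.adult, scores.violence, scores.offensive, scores.medical);

    return Response.json({ scores, maxScore });
  },
};
```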

Categories Analyzed

Category | Description | Threshold
Adult/Sexual | Nudity, sexual acts, explicit content | ≥0.7 = age-restrict
Violence | Gore, graphic violence, injuries | ≥0.7 = age-restrict
Offensive | Hate symbols, shocking imagery | ≥0.7 = flag for review
Medical | Surgery, anatomy, medical procedures | ≥0.7 = age-restrict

Automatic Actions

Based on AI confidence scores:

  • Score 0.0-0.69: No action (safe content)
  • Score 0.70-0.79: Auto age-restrict + moderation queue
  • Score 0.80-0.89: Auto age-restrict + priority review
  • Score 0.90-1.0: Auto-remove + immediate moderator review
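The bands above amount to a simple score-to-action mapping. A minimal sketch, using illustrative names rather than Pichr's code:

```ts
type ModerationAction =
  | "none"
  | "age_restrict_and_queue"
  | "age_restrict_priority_review"
  | "remove_and_escalate";

// Map an AI confidence score (0.0-1.0) to the automatic action bands above.
function actionForScore(score: number): ModerationAction {
  if (score >= 0.9) return "remove_and_escalate";           // 0.90-1.00
  if (score >= 0.8) return "age_restrict_priority_review";  // 0.80-0.89
  if (score >= 0.7) return "age_restrict_and_queue";        // 0.70-0.79
  return "none";                                            // 0.00-0.69
}
```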

Human Moderation

Our Moderation Team

  • Size: 3-5 trained moderators (scaling as needed)
  • Availability: 24/7 monitoring for critical reports
  • Training: UK Online Safety Act compliance, trauma-informed approach
  • Oversight: Quality assurance and regular audits

What Moderators Review

  1. High-Confidence AI Flags (score ≥0.8)
  2. User Reports (all categories)
  3. Appeals (incorrect removals/restrictions)
  4. Edge Cases (borderline content, context-dependent)
  5. Repeat Offenders (pattern analysis)

Review Process

Standard Review (72 Hours)

  1. Content Assessment

    • View image and metadata
    • Check NSFW score and categories
    • Review user history
  2. Policy Check

    • Does content violate Terms of Service?
    • Is age restriction appropriate?
    • Are there mitigating factors (context, artistic merit)?
  3. Decision

    • No action (content is safe)
    • Age-restrict (NSFW but allowed)
    • Remove (policy violation)
    • Ban user (severe or repeat violation)
  4. Documentation

    • Log decision in audit trail
    • Update moderation statistics
    • Notify user if action taken
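For illustration, a decision record in the audit trail might look like the sketch below. The field names are assumptions based on the review steps above, not Pichr's actual schema.

```ts
// Illustrative audit-trail entry for a moderation decision.
interface ModerationDecision {
  imageId: string;
  reviewedAt: string;      // ISO 8601 timestamp
  nsfwScore: number;       // AI confidence at time of review
  categories: string[];    // e.g. ["adult"], ["violence"]
  decision: "no_action" | "age_restrict" | "remove" | "ban_user";
  rationale: string;       // policy clause and any mitigating factors considered
  moderatorId: string;
  userNotified: boolean;
}

// Sketch: persist the decision and note whether the user should be notified.
async function recordDecision(
  d: ModerationDecision,
  store: { put(key: string, value: string): Promise<void> }
): Promise<void> {
  await store.put(`decision:${d.imageId}:${d.reviewedAt}`, JSON.stringify(d));
  if (d.decision !== "no_action" && d.userNotified) {
    // e.g. enqueue a user notification email here
  }
}
```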

Priority Review (24 Hours)

For high-severity reports (CSAM, terrorism, violence):

  1. Immediate Escalation

    • Flagged to senior moderator
    • Reviewed within 1-2 hours
  2. Emergency Actions

    • Content removed immediately (if illegal)
    • User suspended pending review
    • Authorities contacted (if required)
  3. Follow-Up

    • Full investigation
    • Permanent action decided
    • User notified (except in cases of illegal content)

Moderation Queue

Priority System

Priority | Response Time | Examples
Critical | 1-2 hours | CSAM, terrorism, imminent harm
High | 24 hours | Violence, graphic content, severe harassment
Medium | 48 hours | Adult content reports, copyright claims
Low | 72 hours | Spam, minor policy violations

Queue Management

  • Critical reports: Immediate notification to senior moderators
  • High priority: Reviewed within 24 hours
  • Medium/Low: Reviewed in order received
  • Overflow: Additional moderators on-call during high volume
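For illustration, the priority table and queue routing can be expressed as a small mapping. The report category names and functions below are assumptions, not Pichr's implementation.

```ts
type ReportPriority = "critical" | "high" | "medium" | "low";

// Response-time targets (in hours) from the priority table above.
const SLA_HOURS: Record<ReportPriority, number> = {
  critical: 2,   // CSAM, terrorism, imminent harm: 1-2 hours
  high: 24,      // violence, graphic content, severe harassment
  medium: 48,    // adult content reports, copyright claims
  low: 72,       // spam, minor policy violations
};

// Hypothetical report categories; the mapping is illustrative.
function priorityForCategory(category: string): ReportPriority {
  switch (category) {
    case "csam":
    case "terrorism":
    case "imminent_harm":
      return "critical";
    case "violence":
    case "graphic_content":
    case "severe_harassment":
      return "high";
    case "adult_content":
    case "copyright":
      return "medium";
    default:
      return "low"; // spam, minor policy violations
  }
}

// Deadline by which a report should be reviewed.
function reviewDeadline(category: string, reportedAt: Date): Date {
  return new Date(reportedAt.getTime() + SLA_HOURS[priorityForCategory(category)] * 3_600_000);
}
```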

Enforcement Actions

Warning (First-Time Minor Violations)

Trigger: First policy violation that isn’t severe

Action:

  • User receives email notification
  • Content may be age-restricted or removed
  • Account remains active
  • Warning logged in user history

Example: First-time upload of adult content without age restriction


Content Removal (Policy Violations)

Trigger: Content violates Terms of Service or Acceptable Use Policy

Action:

  • Content removed from platform
  • User notified with reason
  • Option to appeal decision
  • Account remains active (unless repeat violation)

Example: Copyright infringement, spam, misleading content


Temporary Suspension (Repeat Violations)

Trigger: Multiple policy violations within short timeframe

Duration: 7-30 days (depending on severity)

Action:

  • Account access suspended
  • Content remains visible (unless removed separately)
  • User notified with specific violations listed
  • Can appeal after suspension period

Example: Multiple spam uploads, repeated harassment


Permanent Ban (Severe Violations)

Trigger: Severe policy violation or pattern of abuse

Action:

  • Account permanently disabled
  • All content removed
  • IP/device ban (to prevent re-registration)
  • Authorities notified (if illegal content)
  • No appeals (except for proven false positives)

Examples:

  • CSAM upload
  • Terrorist content
  • Extreme violence
  • Coordinated harassment campaign
  • Ban evasion
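For illustration, the escalation ladder above can be sketched as a single decision function. The types, field names, and thresholds below are assumptions, not Pichr's implementation.

```ts
// Illustrative escalation ladder based on the enforcement actions above.
type Enforcement =
  | { kind: "warning" }
  | { kind: "content_removal"; reason: string }
  | { kind: "temporary_suspension"; days: number }   // 7-30 days by severity
  | { kind: "permanent_ban"; notifyAuthorities: boolean };

interface ViolationContext {
  severe: boolean;          // e.g. CSAM, terrorism, extreme violence
  illegal: boolean;         // triggers reporting to authorities
  priorViolations: number;  // recent violations on the account
}

function enforcementFor(v: ViolationContext): Enforcement {
  if (v.illegal || v.severe) {
    return { kind: "permanent_ban", notifyAuthorities: v.illegal };
  }
  if (v.priorViolations >= 2) {
    // Repeat violations within a short timeframe: 7-30 days by severity.
    return { kind: "temporary_suspension", days: 7 };
  }
  if (v.priorViolations === 1) {
    return { kind: "content_removal", reason: "repeat policy violation" };
  }
  return { kind: "warning" }; // first-time, non-severe violation
}
```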

Appeals Process

Eligibility

You can appeal if:

  • Your content was removed incorrectly
  • Your account was suspended/banned in error
  • You believe the AI score was wrong
  • You have new context or information

You cannot appeal if:

  • Content contained CSAM (zero tolerance)
  • Content promoted terrorism
  • You have a history of repeat violations
  • You were banned for ban evasion

How to Appeal

  1. Email: appeals@pichr.io

  2. Include:

    • Your account email
    • Image ID or account action reference
    • Detailed explanation of why action was incorrect
    • Any supporting evidence or context
  3. Review:

    • Senior moderator reviews within 5 business days
    • Decision is final
  4. Outcome:

    • Upheld: Original action stands
    • Overturned: Content restored, suspension lifted
    • Modified: Different action applied (e.g., age-restrict instead of remove)

Appeal Success Rate

Based on our data:

  • ~15% of appeals are fully successful
  • ~10% result in modified actions
  • ~75% of original decisions are upheld

Most successful appeals involve:

  • Artistic/educational content misidentified as NSFW
  • Context not considered by AI
  • Satire or commentary misunderstood

Transparency & Accountability

Public Reporting

We publish quarterly transparency reports detailing:

  • Total moderation actions taken
  • Breakdown by category and action type
  • AI vs human-initiated removals
  • Appeal statistics
  • Compliance with legal requests

View latest transparency report →

Moderation Oversight

  • Internal Audits: Monthly quality assurance reviews
  • External Review: Annual third-party audit
  • User Feedback: Regular surveys and feedback analysis
  • Continuous Improvement: Policy updates based on data

UK Online Safety Act Compliance

Regulatory Requirements

Under the UK Online Safety Act 2023, Pichr must:

  1. Risk Assessment: Identify and mitigate harms on the platform
  2. Proactive Moderation: Don’t wait for reports—actively detect harmful content
  3. User Empowerment: Provide reporting tools and safety controls
  4. Transparency: Publish moderation statistics and policies
  5. Accountability: Cooperate with Ofcom (UK regulator)

How We Comply

  • ✅ Automated Detection: AI scans every upload
  • ✅ Human Oversight: Trained moderation team
  • ✅ 24/7 Monitoring: Critical reports handled immediately
  • ✅ User Tools: Report button, Safe Mode, age gates
  • ✅ Public Reporting: Quarterly transparency reports
  • ✅ Regulator Cooperation: Open communication with Ofcom


Moderator Well-Being

We recognize that content moderation can be traumatic work:

Support Measures

  • Trauma-Informed Training: Moderators receive mental health support training
  • Rotation System: No moderator reviews extreme content exclusively
  • Counseling: Free access to professional counseling services
  • Breaks: Mandatory breaks during shifts
  • Peer Support: Team debriefs and support groups

Quality of Life

  • Fair Pay: Above-industry-standard compensation
  • Reasonable Hours: No mandatory overtime
  • Tools: Blurring, thumbnail views, AI assist to reduce direct exposure
  • Recognition: Celebrating the important work moderators do

Frequently Asked Questions

Why was my content removed?

Check your email for a notification explaining the removal. Common reasons:

  • NSFW score ≥0.9 (automatic removal)
  • User reports resulted in policy violation finding
  • Copyright infringement (DMCA takedown)

Can I get a second opinion?

Yes. Email appeals@pichr.io with your image ID and explanation. A senior moderator will review.

How long does an appeal take?

Standard appeals are reviewed within 5 business days. Critical appeals (account bans) are prioritized and reviewed within 2 business days.

What if I disagree with the appeal decision?

Appeal decisions are final. However, if you have new evidence or information not included in the original appeal, you may submit a new appeal.

Do you use my reports for training?

Yes. Reported content (with identifying information removed) is used to train our AI systems and improve moderation accuracy. You can opt out by emailing privacy@pichr.io.

Can I be a moderator?

We occasionally hire moderators. Requirements:

  • 18+ years old
  • UK resident
  • Strong judgment and decision-making skills
  • Ability to handle sensitive content
  • Email: jobs@pichr.io to inquire

Contact

  • Moderation Questions
  • Appeals: appeals@pichr.io
  • Compliance & Regulatory

Our commitment: Pichr will always prioritize user safety while respecting freedom of expression within the bounds of UK law.