Content Moderation Policy
Pichr’s content moderation approach combines automated AI detection with human review to maintain a safe, UK-compliant platform.
Last Updated: 13 November 2025
Our Moderation Approach
Multi-Layered System
1. Automated Detection (AI)
   - Every upload is scanned by Cloudflare Workers AI
   - NSFW scoring and categorization
   - Instant flagging of high-risk content
2. Human Review (Moderators)
   - Trained moderation team reviews flagged content
   - Context-aware decision making
   - Appeals and edge case handling
3. Community Reports (Users)
   - User-generated reports for policy violations
   - Community-driven safety
   - Priority routing for severe violations
Automated Moderation (AI)
How It Works
Every image uploaded to Pichr is automatically scanned:
- Technology: Cloudflare Workers AI
- Speed: Real-time (during upload finalization)
- Accuracy: ~95% for common NSFW categories
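For illustration, the scan step might look roughly like this inside a Cloudflare Worker. The model identifier, result shape, and `scanUpload` helper below are hypothetical placeholders, not Pichr's actual code; only the general `env.AI.run()` binding pattern follows the standard Workers AI API.

```ts
// Hypothetical sketch of the upload-scan step in a Cloudflare Worker.
// The model name and result shape are placeholders, not Pichr internals.

interface NsfwResult {
  category: "adult" | "violence" | "offensive" | "medical";
  score: number; // 0.0 (safe) to 1.0 (high-confidence NSFW)
}

interface Env {
  // Workers AI binding (typed loosely here; see @cloudflare/workers-types)
  AI: { run(model: string, inputs: unknown): Promise<unknown> };
}

export async function scanUpload(image: ArrayBuffer, env: Env): Promise<NsfwResult[]> {
  const results = await env.AI.run("@cf/example/nsfw-classifier", {
    image: [...new Uint8Array(image)], // raw image bytes
  });
  return results as NsfwResult[];
}
```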
Categories Analyzed
| Category | Description | Threshold |
|---|---|---|
| Adult/Sexual | Nudity, sexual acts, explicit content | ≥0.7 = age-restrict |
| Violence | Gore, graphic violence, injuries | ≥0.7 = age-restrict |
| Offensive | Hate symbols, shocking imagery | ≥0.7 = flag for review |
| Medical | Surgery, anatomy, medical procedures | ≥0.7 = age-restrict |
Automatic Actions
Based on AI confidence scores:
- Score 0.0-0.69: No action (safe content)
- Score 0.70-0.79: Auto age-restrict + moderation queue
- Score 0.80-0.89: Auto age-restrict + priority review
- Score 0.90-1.0: Auto-remove + immediate moderator review
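The score bands translate directly into a small decision function. A sketch using the thresholds above (the action names are illustrative, not internal identifiers):

```ts
type ModerationAction =
  | "none"                         // 0.00-0.69: safe content
  | "age_restrict_and_queue"       // 0.70-0.79
  | "age_restrict_priority_review" // 0.80-0.89
  | "remove_and_escalate";         // 0.90-1.00

function actionForScore(score: number): ModerationAction {
  if (score >= 0.9) return "remove_and_escalate";
  if (score >= 0.8) return "age_restrict_priority_review";
  if (score >= 0.7) return "age_restrict_and_queue";
  return "none";
}
```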
Human Moderation
Our Moderation Team
- Size: 3-5 trained moderators (scaling as needed)
- Availability: 24/7 monitoring for critical reports
- Training: UK Online Safety Act compliance, trauma-informed approach
- Oversight: Quality assurance and regular audits
What Moderators Review
- High-Confidence AI Flags (score ≥0.8)
- User Reports (all categories)
- Appeals (incorrect removals/restrictions)
- Edge Cases (borderline content, context-dependent)
- Repeat Offenders (pattern analysis)
Review Process
Standard Review (72 Hours)
1. Content Assessment
   - View image and metadata
   - Check NSFW score and categories
   - Review user history
2. Policy Check
   - Does the content violate the Terms of Service?
   - Is age restriction appropriate?
   - Are there mitigating factors (context, artistic merit)?
3. Decision
   - No action (content is safe)
   - Age-restrict (NSFW but allowed)
   - Remove (policy violation)
   - Ban user (severe or repeat violation)
4. Documentation
   - Log decision in audit trail
   - Update moderation statistics
   - Notify user if action taken
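As a rough sketch of what the Documentation step records, an audit-trail entry might carry fields like these (all field names are hypothetical):

```ts
// Hypothetical shape of an audit-trail entry written at the Documentation step.
interface ModerationDecision {
  imageId: string;
  moderatorId: string;
  decision: "no_action" | "age_restrict" | "remove" | "ban_user";
  nsfwScore: number;  // AI score at time of review
  reason: string;     // policy clause or free-text rationale
  reviewedAt: string; // ISO 8601 timestamp
  userNotified: boolean;
}
```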
Priority Review (24 Hours)
For high-severity reports (CSAM, terrorism, violence):
1. Immediate Escalation
   - Flagged to senior moderator
   - Reviewed within 1-2 hours
2. Emergency Actions
   - Content removed immediately (if illegal)
   - User suspended pending review
   - Authorities contacted (if required)
3. Follow-Up
   - Full investigation
   - Permanent action decided
   - User notified (except in cases of illegal content)
Moderation Queue
Priority System
| Priority | Response Time | Examples |
|---|---|---|
| Critical | 1-2 hours | CSAM, terrorism, imminent harm |
| High | 24 hours | Violence, graphic content, severe harassment |
| Medium | 48 hours | Adult content reports, copyright claims |
| Low | 72 hours | Spam, minor policy violations |
Queue Management
- Critical reports: Immediate notification to senior moderators
- High priority: Reviewed within 24 hours
- Medium/Low: Reviewed in order received
- Overflow: Additional moderators on-call during high volume
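In code, the priority rules above amount to a severity-then-arrival ordering. A sketch with response targets taken from the table (the queue model itself is illustrative, not Pichr's production system):

```ts
type Priority = "critical" | "high" | "medium" | "low";

// Response-time targets (hours) from the priority table above.
const RESPONSE_TARGET_HOURS: Record<Priority, number> = {
  critical: 2,
  high: 24,
  medium: 48,
  low: 72,
};

interface Report {
  id: string;
  priority: Priority;
  receivedAt: number; // epoch milliseconds
}

// Most severe first; within a priority, reports are reviewed in order received.
function sortQueue(reports: Report[]): Report[] {
  const rank: Record<Priority, number> = { critical: 0, high: 1, medium: 2, low: 3 };
  return [...reports].sort(
    (a, b) => rank[a.priority] - rank[b.priority] || a.receivedAt - b.receivedAt,
  );
}
```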
Enforcement Actions
Warning (First-Time Minor Violations)
Trigger: First policy violation that isn’t severe
Action:
- User receives email notification
- Content may be age-restricted or removed
- Account remains active
- Warning logged in user history
Example: First-time upload of adult content without age restriction
Content Removal (Policy Violations)
Trigger: Content violates Terms of Service or Acceptable Use Policy
Action:
- Content removed from platform
- User notified with reason
- Option to appeal decision
- Account remains active (unless repeat violation)
Examples: Copyright infringement, spam, misleading content
Temporary Suspension (Repeat Violations)
Trigger: Multiple policy violations within short timeframe
Duration: 7-30 days (depending on severity)
Action:
- Account access suspended
- Content remains visible (unless removed separately)
- User notified with specific violations listed
- Can appeal after suspension period
Examples: Multiple spam uploads, repeated harassment
Permanent Ban (Severe Violations)
Trigger: Severe policy violation or pattern of abuse
Action:
- Account permanently disabled
- All content removed
- IP/device ban (to prevent re-registration)
- Authorities notified (if illegal content)
- No appeals (except for proven false positives)
Examples:
- CSAM upload
- Terrorist content
- Extreme violence
- Coordinated harassment campaign
- Ban evasion
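Taken together, the enforcement ladder can be summarized in a single decision sketch. The severity labels and the repeat-violation threshold here are assumptions made for illustration, not exact product rules:

```ts
type Enforcement =
  | "warning"
  | "content_removal"
  | "temporary_suspension"
  | "permanent_ban";

// Illustrative only: real decisions also weigh context and user history.
function enforce(
  severity: "minor" | "standard" | "severe",
  priorViolations: number,
): Enforcement {
  if (severity === "severe") return "permanent_ban";       // CSAM, terrorism, etc.
  if (priorViolations >= 2) return "temporary_suspension"; // repeat violations
  if (severity === "standard") return "content_removal";   // ToS/AUP violation
  return "warning";                                        // first-time minor violation
}
```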
Appeals Process
Eligibility
You can appeal if:
- Your content was removed incorrectly
- Your account was suspended/banned in error
- You believe the AI score was wrong
- You have new context or information
You cannot appeal if:
- Content contained CSAM (zero tolerance)
- Content promoted terrorism
- You have a history of repeat violations
- You were banned for ban evasion
How to Appeal
1. Email: appeals@pichr.io
2. Include:
   - Your account email
   - Image ID or account action reference
   - Detailed explanation of why the action was incorrect
   - Any supporting evidence or context
3. Review:
   - Senior moderator reviews within 5 business days
   - Decision is final
4. Outcome:
   - Upheld: Original action stands
   - Overturned: Content restored, suspension lifted
   - Modified: Different action applied (e.g., age-restrict instead of remove)
Appeal Success Rate
Based on our data:
- ~15% of appeals are fully successful
- ~10% result in modified actions
- ~75% of original decisions are upheld
Most successful appeals involve:
- Artistic/educational content misidentified as NSFW
- Context not considered by AI
- Satire or commentary misunderstood
Transparency & Accountability
Public Reporting
We publish quarterly transparency reports detailing:
- Total moderation actions taken
- Breakdown by category and action type
- AI vs human-initiated removals
- Appeal statistics
- Compliance with legal requests
View latest transparency report →
Moderation Oversight
- Internal Audits: Monthly quality assurance reviews
- External Review: Annual third-party audit
- User Feedback: Regular surveys and feedback analysis
- Continuous Improvement: Policy updates based on data
UK Online Safety Act Compliance
Regulatory Requirements
Under the UK Online Safety Act 2023, Pichr must:
- Risk Assessment: Identify and mitigate harms on the platform
- Proactive Moderation: Actively detect harmful content rather than waiting for reports
- User Empowerment: Provide reporting tools and safety controls
- Transparency: Publish moderation statistics and policies
- Accountability: Cooperate with Ofcom (UK regulator)
How We Comply
✅ Automated Detection: AI scans every upload
✅ Human Oversight: Trained moderation team
✅ 24/7 Monitoring: Critical reports handled immediately
✅ User Tools: Report button, Safe Mode, age gates
✅ Public Reporting: Quarterly transparency reports
✅ Regulator Cooperation: Open communication with Ofcom
Moderator Well-Being
We recognize that content moderation can be traumatic work:
Support Measures
- Trauma-Informed Training: Moderators receive mental health support training
- Rotation System: No moderator reviews extreme content exclusively
- Counseling: Free access to professional counseling services
- Breaks: Mandatory breaks during shifts
- Peer Support: Team debriefs and support groups
Quality of Life
- Fair Pay: Above-industry-standard compensation
- Reasonable Hours: No mandatory overtime
- Tools: Blurring, thumbnail views, AI assist to reduce direct exposure
- Recognition: Celebrating the important work moderators do
Frequently Asked Questions
Why was my content removed?
Check your email for a notification explaining the removal. Common reasons:
- NSFW score ≥0.9 (automatic removal)
- User reports resulted in policy violation finding
- Copyright infringement (DMCA takedown)
Can I get a second opinion?
Yes. Email appeals@pichr.io with your image ID and explanation. A senior moderator will review.
How long does an appeal take?
Standard appeals are reviewed within 5 business days. Critical appeals (account bans) are prioritized and reviewed within 2 business days.
What if I disagree with the appeal decision?
Appeal decisions are final. However, if you have new evidence or information not included in the original appeal, you may submit a new appeal.
Do you use my reports for training?
Yes. Reported content (with identifying information removed) is used to train our AI systems and improve moderation accuracy. You can opt out by emailing privacy@pichr.io.
Can I be a moderator?
We occasionally hire moderators. Requirements:
- 18+ years old
- UK resident
- Strong judgment and decision-making skills
- Ability to handle sensitive content
Email jobs@pichr.io to inquire.
Resources
- Safety Center Home
- How to Report Content
- Age Verification Guide
- Terms of Service
- Acceptable Use Policy
Contact
Moderation Questions
- Email: safety@pichr.io
- Response Time: 24-48 hours
Appeals
- Email: appeals@pichr.io
- Response Time: 5 business days
Compliance & Regulatory
- Email: compliance@pichr.io
- Response Time: 48-72 hours
Our commitment: Pichr will always prioritize user safety while respecting freedom of expression within the bounds of UK law.