API Documentation

Everything you need to integrate moder8r into your application

Moderate Text

Analyze text content for potentially harmful material using AI-powered moderation.

POST
/v1/moderate/text

Moderate Text Content

Submit text content for AI-powered moderation analysis. Returns detailed categorization and confidence scores for various types of harmful content.

Parameters

Name    | Type   | Required | Description
--------|--------|----------|------------------------------------------------
content | string | Yes      | The text content to moderate. Maximum 10,000 characters. Example: "This is some text to moderate"
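Because requests exceeding the 10,000-character limit will be rejected, it can be worth validating length client-side before spending a network round trip. A minimal sketch (the function name and limit constant are illustrative; the limit value comes from the parameter table above):

```python
# Client-side guard for the documented 10,000-character content limit.
MAX_CONTENT_LENGTH = 10_000  # documented maximum for the "content" parameter

def validate_content(content: str) -> str:
    """Return the text unchanged if it is non-empty and within the limit."""
    if not content:
        raise ValueError("content must be a non-empty string")
    if len(content) > MAX_CONTENT_LENGTH:
        raise ValueError(
            f"content is {len(content)} characters; "
            f"maximum is {MAX_CONTENT_LENGTH}"
        )
    return content
```

Long inputs can then be split into chunks and moderated separately, at the cost of one request per chunk.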

Request Headers

Header        | Value                       | Required
--------------|-----------------------------|----------
Authorization | Bearer m8r_sk_your_key_here | Yes
Content-Type  | application/json            | Yes

Request

cURL Request
curl -X POST https://api.moder8r.app/v1/moderate/text \
  -H "Authorization: Bearer m8r_sk_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "This is some text that needs moderation checking."
  }'

Response

Success Response
{
  "id": "mod_1a2b3c4d5e6f",
  "status": "completed",
  "content": "This is some text that needs moderation checking.",
  "result": {
    "flagged": false,
    "categories": {
      "hate": {
        "flagged": false,
        "probability": 0.0001
      },
      "harassment": {
        "flagged": false,
        "probability": 0.0002
      },
      "self-harm": {
        "flagged": false,
        "probability": 0.0001
      },
      "sexual": {
        "flagged": false,
        "probability": 0.0003
      },
      "violence": {
        "flagged": false,
        "probability": 0.0001
      }
    },
    "confidence": 0.99,
    "recommendation": "approve"
  },
  "processed_at": "2024-01-15T10:30:45.123Z",
  "usage": {
    "characters": 52,
    "tokens_used": 13
  }
}

Response Fields

id

Unique identifier for this moderation request

status

Processing status: completed, processing, or failed

content

Echo of the submitted content

result.flagged

Boolean indicating if any category was flagged as problematic

result.categories

Object containing analysis for each category: hate, harassment, self-harm, sexual, violence

result.confidence

Overall confidence score (0-1) for the moderation result

result.recommendation

Suggested action: approve, review, or reject

processed_at

ISO 8601 timestamp of when the moderation analysis completed

usage

Character count of the submitted content and the number of tokens consumed processing it
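A typical caller branches on the fields above: check status first, then use recommendation (backed by the top-level flagged value) to decide what to do with the content. A sketch, assuming the field layout of the success response above (the return values and function name are illustrative, not part of the API):

```python
# Decide what to do with content based on a moderation response.
def handle_moderation(response: dict) -> str:
    # "processing" and "failed" both mean there is no usable result yet.
    if response["status"] != "completed":
        return "retry"
    result = response["result"]
    recommendation = result["recommendation"]  # approve, review, or reject
    if recommendation == "approve" and not result["flagged"]:
        return "publish"
    if recommendation == "review":
        return "queue_for_human_review"
    return "block"

example = {
    "status": "completed",
    "result": {"flagged": False, "recommendation": "approve"},
}
```

Routing "review" results to a human queue rather than auto-blocking keeps borderline content from being rejected outright.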

Code Examples

JavaScript/Node.js

// Node.js / JavaScript (fetch is built in from Node.js 18+)
const response = await fetch('https://api.moder8r.app/v1/moderate/text', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer m8r_sk_your_key_here',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    content: 'Text to moderate'
  })
});

if (!response.ok) {
  throw new Error(`Moderation request failed: ${response.status}`);
}

const data = await response.json();
console.log(data);

Python

# Python
import requests

url = 'https://api.moder8r.app/v1/moderate/text'
headers = {
    'Authorization': 'Bearer m8r_sk_your_key_here',
    'Content-Type': 'application/json'
}
data = {
    'content': 'Text to moderate'
}

response = requests.post(url, headers=headers, json=data)
response.raise_for_status()  # surface HTTP errors before parsing the body
result = response.json()
print(result)

Category Thresholds

Content is flagged when the probability for any category exceeds these thresholds:

Category   | Threshold | Description
-----------|-----------|------------------------------------------
hate       | 0.7       | Content expressing hatred toward groups
harassment | 0.7       | Content intended to harass or bully
self-harm  | 0.5       | Content promoting self-injury
sexual     | 0.8       | Adult sexual content
violence   | 0.6       | Content promoting violence
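The flagging rule in this section can be sketched directly: a category is flagged when its probability exceeds the threshold in the table, and the top-level flagged value is true when any category is flagged. The threshold values below are copied from the table; the function name is illustrative:

```python
# Per-category thresholds from the table above.
THRESHOLDS = {
    "hate": 0.7,
    "harassment": 0.7,
    "self-harm": 0.5,
    "sexual": 0.8,
    "violence": 0.6,
}

def flag_categories(probabilities: dict) -> dict:
    """Apply the documented flagging rule to per-category probabilities."""
    categories = {
        name: {"flagged": prob > THRESHOLDS[name], "probability": prob}
        for name, prob in probabilities.items()
    }
    return {
        # flagged is true if any single category exceeded its threshold
        "flagged": any(c["flagged"] for c in categories.values()),
        "categories": categories,
    }
```

Note that "exceeds" is strict: a probability exactly equal to a threshold does not flag the category.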