Official Kotlin SDK for the Tuteliq API
AI-powered child safety analysis
API Docs • Dashboard • Trust • Discord
Gradle (Kotlin DSL):

```kotlin
dependencies {
    implementation("ai.tuteliq:tuteliq:2.2.4")
}
```

Gradle (Groovy DSL):

```groovy
dependencies {
    implementation 'ai.tuteliq:tuteliq:2.2.4'
}
```

Maven:

```xml
<dependency>
    <groupId>ai.tuteliq</groupId>
    <artifactId>tuteliq</artifactId>
    <version>2.2.4</version>
</dependency>
```

Requirements:

- Kotlin 1.9+
- Java 17+
Quick start:

```kotlin
import ai.tuteliq.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val client = Tuteliq(apiKey = "your-api-key")

    // Quick safety analysis
    val result = client.analyze("Message to check")
    if (result.riskLevel != RiskLevel.SAFE) {
        println("Risk: ${result.riskLevel}")
        println("Summary: ${result.summary}")
    }

    client.close()
}
```

Client configuration:

```kotlin
import ai.tuteliq.Tuteliq

// Simple
val client = Tuteliq(apiKey = "your-api-key")

// With options
val client = Tuteliq(
    apiKey = "your-api-key",
    timeout = 30_000L, // Request timeout in milliseconds
    maxRetries = 3, // Retry attempts
    retryDelay = 1_000L, // Initial retry delay in milliseconds
)
```
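The quick-start example calls `client.close()` when it is done. A minimal sketch of tidier cleanup with Kotlin's `use`, assuming `Tuteliq` implements `AutoCloseable` (an assumption; only `close()` is shown in this README):

```kotlin
import ai.tuteliq.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // use {} closes the client even if a call throws
    Tuteliq(apiKey = "your-api-key").use { client ->
        val result = client.analyze("Message to check")
        println(result.riskLevel)
    }
}
```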
Bullying detection:

```kotlin
val result = client.detectBullying("Nobody likes you, just leave")

if (result.isBullying) {
    println("Severity: ${result.severity}") // Severity.MEDIUM
    println("Types: ${result.bullyingType}") // ["exclusion", "verbal_abuse"]
    println("Confidence: ${result.confidence}") // 0.92
    println("Rationale: ${result.rationale}")
}
```

Grooming detection:

```kotlin
import ai.tuteliq.*

val result = client.detectGrooming(
    DetectGroomingInput(
        messages = listOf(
            GroomingMessage(role = MessageRole.ADULT, content = "This is our secret"),
            GroomingMessage(role = MessageRole.CHILD, content = "Ok I won't tell"),
        ),
        childAge = 12,
    )
)

if (result.groomingRisk == GroomingRisk.HIGH) {
    println("Flags: ${result.flags}") // ["secrecy", "isolation"]
}

// Per-message breakdown (optional, returned on conversation-aware endpoints)
result.messageAnalysis?.forEach { m ->
    println("Message ${m.messageIndex}: risk=${m.riskScore}, flags=${m.flags}, summary=${m.summary}")
}
```
Unsafe content detection:

```kotlin
val result = client.detectUnsafe("I don't want to be here anymore")

if (result.unsafe) {
    println("Categories: ${result.categories}") // ["self_harm", "crisis"]
    println("Severity: ${result.severity}") // Severity.CRITICAL
}
```

Combined analysis (runs bullying and unsafe detection in one call):

```kotlin
val result = client.analyze("Message to check")
println("Risk Level: ${result.riskLevel}") // RiskLevel.SAFE/LOW/MEDIUM/HIGH/CRITICAL
println("Risk Score: ${result.riskScore}") // 0.0 - 1.0
println("Summary: ${result.summary}")
println("Action: ${result.recommendedAction}")val result = client.analyzeEmotions("I'm so stressed about everything")
println("Emotions: ${result.dominantEmotions}") // ["anxiety", "sadness"]
println("Trend: ${result.trend}") // EmotionTrend.WORSENING
println("Followup: ${result.recommendedFollowup}")import ai.tuteliq.*
val plan = client.getActionPlan(
    GetActionPlanInput(
        situation = "Someone is spreading rumors about me",
        childAge = 12,
        audience = Audience.CHILD,
        severity = Severity.MEDIUM,
    )
)

println("Steps: ${plan.steps}")
println("Tone: ${plan.tone}")import ai.tuteliq.*
val report = client.generateReport(
    GenerateReportInput(
        messages = listOf(
            ReportMessage(sender = "user1", content = "Threatening message"),
            ReportMessage(sender = "child", content = "Please stop"),
        ),
        childAge = 14,
    )
)

println("Summary: ${report.summary}")
println("Risk: ${report.riskLevel}")
println("Next Steps: ${report.recommendedNextSteps}")Pro tier ($99/mo)+ required — 5 credits per request —
POST /v1/verification/age
val ageResult = client.verifyAge(
    document = File("id-front.jpg"),
    selfie = File("selfie.jpg"),
    method = VerificationMethod.COMBINED // DOCUMENT | BIOMETRIC | COMBINED
)

println(ageResult.verified) // true
println(ageResult.estimatedAge) // 15
println(ageResult.ageRange) // "13-15"
println(ageResult.isMinor) // true
println(ageResult.confidence) // 0.97
```

Identity verification requires the Business tier ($349/mo) or higher and costs 10 credits per request (`POST /v1/verification/identity`):

```kotlin
val identityResult = client.verifyIdentity(
    document = File("id-front.jpg"),
    selfie = File("selfie.jpg")
)

println(identityResult.verified) // true
println(identityResult.matchScore) // 0.98
println(identityResult.livenessPassed) // true
println(identityResult.documentAuthenticated) // true
println(identityResult.isMinor) // false
```

Real-time voice analysis over WebSocket:

```kotlin
val session = client.voiceStream(
    config = VoiceStreamConfig(
        intervalSeconds = 10,
        analysisTypes = listOf("bullying", "unsafe"),
    ),
    handlers = VoiceStreamHandlers(
        onReady = { println("Session ready: ${it.sessionId}") },
        onTranscription = { println("Text: ${it.text}") },
        onAlert = { println("Alert: ${it.category} (${it.severity})") },
        onSessionSummary = { println("Summary: risk ${it.overallRisk}") },
    ),
)

session.connect()
// Send audio data
session.sendAudio(audioBytes)
// End session and get summary
val summary = session.end()
// Cleanup
session.close()
```
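Because the session exposes `connect()`, `end()`, and `close()`, it is worth releasing it in a `finally` block. A sketch, where the `VoiceStreamSession` type name and the `suspend` modifier are assumptions:

```kotlin
// Sketch: release the socket even if streaming fails mid-session
suspend fun streamSafely(session: VoiceStreamSession, chunks: Sequence<ByteArray>) {
    try {
        session.connect()
        chunks.forEach { session.sendAudio(it) }
        val summary = session.end()
        println("Final risk: ${summary.overallRisk}")
    } finally {
        session.close()
    }
}
```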
All analysis result types include a `creditsUsed` field that indicates how many API credits were consumed:

```kotlin
val result = client.detectBullying("text to analyze")
println("Credits used: ${result.creditsUsed}")
```

| Method | Credits |
|---|---|
| `detectBullying()` | 1 |
| `detectUnsafe()` | 1 |
| `detectGrooming()` | 1 per 10 messages |
| `analyzeEmotions()` | 1 per 10 messages |
| `getActionPlan()` | 2 |
| `generateReport()` | 3 |
| `analyzeVoice()` | 5 |
| `analyzeImage()` | 3 |
| `verifyAge()` | 5 |
| `verifyIdentity()` | 10 |
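If you want a client-side view of spend, you can tally `creditsUsed` across calls. An illustrative helper, not part of the SDK (the `Int` type of `creditsUsed` is an assumption):

```kotlin
import java.util.concurrent.atomic.AtomicLong

// Illustrative: accumulate credits consumed across all requests
class CreditMeter {
    private val total = AtomicLong(0)

    // Call after each request: meter.record(result.creditsUsed)
    fun record(creditsUsed: Int): Long = total.addAndGet(creditsUsed.toLong())

    val spent: Long get() = total.get()
}
```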
All methods support `externalId` and `metadata` for correlating requests:

```kotlin
val result = client.detectBullying(
    content = "Test message",
    externalId = "msg_12345",
    metadata = mapOf("user_id" to "usr_abc", "session" to "sess_xyz"),
)

// Echoed back in response
println(result.externalId) // "msg_12345"
println(result.metadata) // {"user_id": "usr_abc", ...}
```

Rate limits and usage:

```kotlin
val result = client.detectBullying("test")

// Access usage stats after any request
client.usage?.let { usage ->
    println("Limit: ${usage.limit}")
    println("Used: ${usage.used}")
    println("Remaining: ${usage.remaining}")
}

// Request metadata
println("Request ID: ${client.lastRequestId}")import ai.tuteliq.*
Error handling:

```kotlin
import ai.tuteliq.*

try {
    val result = client.detectBullying("test")
} catch (e: AuthenticationException) {
    println("Auth error: ${e.message}")
} catch (e: RateLimitException) {
    println("Rate limited: ${e.message}")
} catch (e: ValidationException) {
    println("Invalid input: ${e.message}, details: ${e.details}")
} catch (e: ServerException) {
    println("Server error ${e.statusCode}: ${e.message}")
} catch (e: TimeoutException) {
    println("Timeout: ${e.message}")
} catch (e: NetworkException) {
    println("Network error: ${e.message}")
} catch (e: TuteliqException) {
    println("Error: ${e.message}")
}
```
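If you need backoff beyond the client's built-in `maxRetries`, you can catch `RateLimitException` yourself. A sketch with a fixed delay (whether the exception carries a retry-after hint is not documented here):

```kotlin
import ai.tuteliq.*
import kotlinx.coroutines.delay

// Illustrative: one manual retry with a fixed pause on rate limiting
suspend fun analyzeWithBackoff(client: Tuteliq, text: String) =
    try {
        client.analyze(text)
    } catch (e: RateLimitException) {
        delay(5_000) // fixed pause before the single retry
        client.analyze(text)
    }
```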
A coroutine-friendly wrapper class:

```kotlin
import ai.tuteliq.*
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

class SafetyChecker(private val apiKey: String) {
    private val client = Tuteliq(apiKey = apiKey)

    suspend fun checkMessage(message: String): AnalyzeResult {
        return withContext(Dispatchers.IO) {
            client.analyze(message)
        }
    }

    fun close() {
        client.close()
    }
}
```

The bullying and unsafe content methods analyze a single text field per request. If your app receives messages one at a time, concatenate a sliding window of recent messages into one string before calling the API. Single words or short fragments lack context for accurate detection and can be exploited to bypass safety filters.

```kotlin
// Bad: each message analyzed in isolation, easily evaded
messages.forEach { msg ->
    client.detectBullying(msg)
}

// Good: recent messages analyzed together
val window = recentMessages.takeLast(10).joinToString(" ")
client.detectBullying(window)
```
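A reusable way to maintain that window is a small buffer around the client. An illustrative helper, not part of the SDK:

```kotlin
import ai.tuteliq.*

// Illustrative: keep the last N messages and analyze them together
class SlidingWindowAnalyzer(
    private val client: Tuteliq,
    private val windowSize: Int = 10,
) {
    private val window = ArrayDeque<String>()

    suspend fun onMessage(message: String) = run {
        window.addLast(message)
        if (window.size > windowSize) window.removeFirst()
        // Send the whole window, not just the newest message
        client.detectBullying(window.joinToString(" "))
    }
}
```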
The grooming method already accepts a messages list and analyzes the full conversation in context.

Enable `PII_REDACTION_ENABLED=true` on your Tuteliq API to automatically strip emails, phone numbers, URLs, social handles, IPs, and other PII from detection summaries and webhook payloads. The original text is still analyzed in full; only stored outputs are scrubbed.
Tuteliq supports 27 languages with automatic detection — no configuration required.
English (stable) and 26 beta languages: Spanish, Portuguese, Ukrainian, Swedish, Norwegian, Danish, Finnish, German, French, Dutch, Polish, Italian, Turkish, Romanian, Greek, Czech, Hungarian, Bulgarian, Croatian, Slovak, Lithuanian, Latvian, Estonian, Slovenian, Maltese, and Irish.
All 24 EU official languages + Ukrainian, Norwegian, and Turkish. Each language includes culture-specific safety guidelines covering local slang, grooming patterns, self-harm coded vocabulary, and filter evasion techniques.
See the Language Support docs for details.
- API Docs: docs.tuteliq.ai
- Discord: discord.gg/7kbTeRYRXD
- Email: support@tuteliq.ai
- Issues: GitHub Issues
MIT License - see LICENSE for details.
Tuteliq offers a free certification program for anyone who wants to deepen their understanding of online child safety. Complete a track, pass the quiz, and earn your official Tuteliq certificate — verified and shareable.
Three tracks available:
| Track | Who it's for | Duration |
|---|---|---|
| Parents & Caregivers | Parents, guardians, grandparents, teachers, coaches | ~90 min |
| Young People (10–16) | Young people who want to learn to spot manipulation | ~60 min |
| Companies & Platforms | Product managers, trust & safety teams, CTOs, compliance officers | ~120 min |
Start here → tuteliq.ai/certify
- 100% Free — no login required
- Verifiable certificate on completion
- Covers grooming recognition, sextortion, cyberbullying, regulatory obligations (KOSA, EU DSA), and more
Before you decide to contribute or sponsor, read these numbers. They are not projections. They are not estimates from a pitch deck. They are verified statistics from the University of Edinburgh, UNICEF, NCMEC, and Interpol.
- 302 million children are victims of online sexual exploitation and abuse every year. That is 10 children every second. (Childlight / University of Edinburgh, 2024)
- 1 in 8 children globally have been victims of non-consensual sexual imagery in the past year. (Childlight, 2024)
- 370 million girls and women alive today experienced rape or sexual assault in childhood. An estimated 240–310 million boys and men experienced the same. (UNICEF, 2024)
- 29.2 million incidents of suspected child sexual exploitation were reported to NCMEC's CyberTipline in 2024 alone — containing 62.9 million files (images, videos). (NCMEC, 2025)
- 546,000 reports of online enticement (adults grooming children) in 2024 — a 192% increase from the year before. (NCMEC, 2025)
- 1,325% increase in AI-generated child sexual abuse material reports between 2023 and 2024. The technology that should protect children is being weaponized against them. (NCMEC, 2025)
- 100 sextortion reports per day to NCMEC. Since 2021, at least 36 teenage boys have taken their own lives because they were victimized by sextortion. (NCMEC, 2025)
- 84% of reports resolve to locations outside the United States. This is not an American problem. This is a global emergency. (NCMEC, 2025)
End-to-end encryption is making platforms blind. In 2024, platforms reported 7 million fewer incidents than the year before — not because abuse stopped, but because they can no longer see it. The tools that catch known images are failing. The systems that rely on human moderators are overwhelmed. The technology to detect behavior — grooming patterns, escalation, manipulation — in real-time text conversations exists right now. It is running at api.tuteliq.ai.
The question is not whether this technology is possible. The question is whether we build the company to put it everywhere it needs to be.
Every second we wait, another child is harmed.
We have the technology. We need the support.
If this mission matters to you, consider sponsoring our open-source work so we can keep building the tools that protect children — and keep them free and accessible for everyone.
Built with care for child safety by the Tuteliq team
