tells · data flow
Where every byte of a tells analysis goes — from your browser to our backend, through the tells AI v1.0 inference layer, and back. Drawn for vendor-due-diligence reviewers; the underlying inference sub-processor is named in §5 below for legal-disclosure completeness.
1. End-to-end diagram
The full path of a single analysis request, from the moment you press "decode" until the response renders in your browser:
USER MESSAGE INPUT
│ (you paste a message / draft / profile excerpt into the tells UI)
│
▼ HTTPS · TLS 1.3 · HSTS preload
NGINX REVERSE PROXY (Eilat VPS, Israel)
│ · enforces TLS, rate limits, security headers
│ · forwards via loopback only — no inter-host traffic
│
▼ loopback (127.0.0.1)
FASTAPI BACKEND (in-memory processing only)
│ · validates the request via Pydantic
│ · resolves the per-user spotlight + cultural framing
│ · constructs the system prompt (open-source: voidd0/tells-prompt-templates)
│ · runs locale-aware crisis-keyword scan (resources from voidd0/tells-crisis-resources)
│
▼ HTTPS · TLS 1.3 · enterprise data terms
TELLS AI v1.0 INFERENCE LAYER (sub-processor: vøiddo AI infrastructure · US/EU regions)
│ · receives system prompt + user content
│ · NOT used for model training (enterprise terms)
│ · stateless per request; no cross-tenant retention
│
▼ HTTPS response
FASTAPI VALIDATION (Pydantic schema check on tells AI v1.0 output)
│ · rejects malformed JSON, retries once with "fix your output" prompt
│ · attaches crisis-resource block if keyword scan triggered
│ · token + cost accounting written to internal cost ledger (no content)
│
▼ HTTPS response
USER BROWSER (rendered output in your tells session)
│
└─ Default tier: input discarded immediately after the response; output retention is capped per §2 (hard-deleted within 90 days).
└─ Patterns mode (Pro / Forensic opt-in): output encrypted at rest, AES-256-GCM with
per-user HKDF key + AAD-bound context.
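The locale-aware crisis-keyword scan in the diagram above can be sketched as follows. This is an illustrative sketch only: the keyword lists and the `crisis_scan` function name are hypothetical; the real resources live in voidd0/tells-crisis-resources and their format is not shown here.

```python
# Hypothetical per-locale keyword lists standing in for the published
# resources in voidd0/tells-crisis-resources.
CRISIS_KEYWORDS = {
    "en": ["hurt myself", "end it all"],
    "he": ["לפגוע בעצמי"],
}

def crisis_scan(text: str, locale: str) -> bool:
    """Return True if any locale-specific crisis keyword appears in the text,
    falling back to the English list for unknown locales."""
    keywords = CRISIS_KEYWORDS.get(locale, CRISIS_KEYWORDS["en"])
    lowered = text.lower()
    return any(kw in lowered for kw in keywords)
```

A positive scan does not block the request; per the diagram, it only causes the crisis-resource block to be attached to the response.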
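The validation step above (reject malformed JSON, retry once with a corrective prompt) can be sketched with the standard library. The real backend validates with Pydantic; the function names and the minimal `"analysis"` field here are assumptions for illustration.

```python
import json

def validate_output(raw: str) -> dict:
    """Parse model output and check the minimal shape the UI needs."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    if "analysis" not in data:
        raise ValueError("missing 'analysis' field")
    return data

def analyze_with_retry(call_model) -> dict:
    """Call the inference layer; on a malformed response, retry exactly once
    with a corrective 'fix your output' instruction."""
    raw = call_model(fix=False)
    try:
        return validate_output(raw)
    except (json.JSONDecodeError, ValueError):
        # Second (and final) attempt; a failure here propagates to the caller.
        return validate_output(call_model(fix=True))
```

The single-retry bound keeps cost accounting predictable: one analysis never triggers more than two inference calls.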
2. Data at rest
What we keep
- Default tier: nothing of your input. Output is held only until your billing cycle's history limit and is hard-deleted after 90 days regardless of tier.
- Patterns mode (Pro / Forensic opt-in): encrypted snapshot per tracked person, AES-256-GCM, per-user HKDF-derived key, AAD bound to (user_id, tracked_person_id, created_at).
- History mode (Pro / Forensic opt-in): encrypted analysis records, retained for up to 12 months with an explicit opt-in checkbox.
- Account row: email (lowercase), bcrypt password hash (cost 12), preferred language, plan tier, billing-cycle counters, signup IP (fraud-flag use only).
- Operational metadata: request timestamps, token counts, model used, cost ledger entries.
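The per-user key derivation and AAD binding described above can be sketched with the standard library (HKDF per RFC 5869 over SHA-256). The salt, info layout, and function names here are illustrative assumptions, not the published voidd0/tells-encryption-spec; the derived key and AAD would then feed an AES-256-GCM AEAD from a crypto library.

```python
import hashlib
import hmac
import json

def hkdf_sha256(master_key: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 HKDF (extract-then-expand) over SHA-256."""
    prk = hmac.new(salt, master_key, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                   # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def patterns_key_and_aad(master_key: bytes, user_id: str,
                         tracked_person_id: str, created_at: str):
    """Derive a per-user AES-256 key and the AAD tuple the records are bound to.
    Illustrative layout only — see voidd0/tells-encryption-spec for the real one."""
    key = hkdf_sha256(master_key, salt=b"tells-patterns",
                      info=user_id.encode(), length=32)
    aad = json.dumps([user_id, tracked_person_id, created_at]).encode()
    return key, aad
```

Because GCM authenticates the AAD, a ciphertext copied between rows fails decryption: it cannot be replayed for a different user or tracked person.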
What we never store
- Raw input message text — discarded immediately after the tells AI v1.0 inference response is received and validated.
- Plaintext logs of input or output content. Logs use hashed identifiers, never content.
- Browser fingerprints, device IDs, advertising identifiers.
- Cross-site identifiers, third-party cookies, marketing pixels.
- Voice recordings, video, biometric data — tells is text-only.
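The "hashed identifiers, never content" logging rule above can be sketched like this. The salt constant and function names are hypothetical; a real deployment would keep the salt out of source control.

```python
import hashlib

LOG_SALT = b"per-deployment-secret"  # illustrative; not how the real secret is stored

def log_id(user_id: str) -> str:
    """Stable, non-reversible identifier for log lines: salted SHA-256, truncated."""
    return hashlib.sha256(LOG_SALT + user_id.encode()).hexdigest()[:16]

def log_request(user_id: str, mode: str, tokens: int) -> str:
    # Only metadata reaches the log line: hashed id, mode, token count.
    return f"req user={log_id(user_id)} mode={mode} tokens={tokens}"
```

The hash is stable per deployment, so operators can correlate a user's own requests for debugging without the log ever containing the email or any content.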
3. Data not tracked
tells deliberately does not run the analytics stack a typical SaaS would. The list below is not aspirational; it is the actual configuration of the production deployment.
- No third-party analytics scripts on content pages. The dashboard, analyze flows, and settings pages load no third-party JavaScript at all.
- Page-view-only analytics on the marketing site. We use Plausible for landing page analytics: page-view counts and referrer hostnames only. Plausible does not set cookies and does not see content.
- No advertising pixels. Zero Facebook Pixel, Google Ads conversion tracking, TikTok Pixel, LinkedIn Insight Tag, Reddit Pixel.
- No Google Analytics. Anywhere on tells.voiddo.com.
- No session replay. No FullStory, Hotjar, LogRocket, Microsoft Clarity, or equivalent.
- No behavioural attribution chains. We do not correlate page views to authenticated user IDs.
4. Data sent to the tells AI v1.0 inference layer — exact contract
The system prompt is open-sourced at voidd0/tells-prompt-templates so you can verify exactly what the inference layer sees. The cultural framing layer is open-sourced at voidd0/tells-cultural-framing.
For each analysis, the tells AI v1.0 inference layer receives:
- The system prompt for the requested mode (read-message / read-person / read-profile / mirror / voice-coach / decisions).
- The cultural framing JSON for your selected language.
- The text content you submitted, verbatim.
- A response schema (so the output is structured JSON we can validate).
The inference layer does not receive: your email, your IP, your account ID, your tracked-person labels, your billing tier, your signup date, or any cross-request identifier. Each call is stateless.
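The contract above can be summarized as a payload builder. This is a sketch of the four fields listed, not the actual wire format; the function name and key names are assumptions.

```python
def build_inference_payload(system_prompt: str, framing: dict,
                            user_text: str, schema: dict) -> dict:
    """Everything the inference layer sees for one analysis, and nothing else:
    no email, IP, account ID, tracked-person labels, billing tier, or
    cross-request identifier."""
    return {
        "system_prompt": system_prompt,    # from voidd0/tells-prompt-templates
        "cultural_framing": framing,       # from voidd0/tells-cultural-framing
        "content": user_text,              # verbatim user submission
        "response_schema": schema,         # so the output is validatable JSON
    }
```

Statelessness falls out of the shape: nothing in the payload lets the inference layer link two calls to the same account.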
5. Sub-processors
The full list of third parties that touch any tells data, with their data-protection terms, is published on the sub-processors page. The two that touch user content are vøiddo AI infrastructure (the inference infrastructure sub-processor for tells AI v1.0) and Sentry (stack traces, with PII scrubbing enabled).
6. Open-source verification path
You can verify these claims against published source:
- github.com/voidd0/tells-encryption-spec — the AES-256-GCM + HKDF + AAD specification.
- github.com/voidd0/tells-prompt-templates — every system prompt that ships to the tells AI v1.0 inference layer.
- github.com/voidd0/tells-cultural-framing — the per-language framing layer.
Backend application code, business logic, and database schemas are NOT open-sourced. The split — publish privacy primitives, keep business logic private — gives you the verification surface without exposing competitive implementation.