Truth Engine is a defense-in-depth digital verification platform built at the MenaCraft Hackathon, where it won 1st Place and the 1200 DT prize. It evaluates any piece of digital media across three independent evidence axes (Content Authenticity, Contextual Consistency, and Source Credibility), then fuses all signals through an explainable AI layer (Llama 3.3 70B chain-of-thought) into a single 0-100 Trust Score with a CRITICAL/HIGH/MEDIUM/LOW risk level. The architecture spans a Next.js 15 frontend with an interactive 3D Trust Sphere (Spline embed), a NestJS API gateway orchestrating 7 FastAPI microservices, a Trust Shield Chrome Extension for instant URL verdicts, and a TruthStamp service for blockchain-based provenance minting. Every AI decision is surfaced in a human-readable Glass Box CoT report: no black boxes.
The digital information ecosystem is flooded with deepfakes, out-of-context media, and unverifiable sources. Existing tools address at most one signal: they check authenticity OR source OR context, never all three together. Journalists, fact-checkers, and high-stakes decision-makers need a single platform that can ingest an image, a text claim, and a source URL, and return an evidence-backed, explainable verdict within 30 seconds, without sending data to opaque third-party services.
- 01
Designed a three-axis verification pipeline orchestrated by a NestJS gateway (port 8000). Axis I (Content Authenticity, port 8001): extracts C2PA/EXIF metadata, runs forensic deepfake detection via HuggingFace Inference API, audits for integrity clashes between metadata and pixel-level content, and applies ImBD stylometric analysis to detect AI-generated text patterns.
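The integrity-clash audit in Axis I can be sketched as a rule check that cross-references metadata claims against pixel-level forensic findings. The field names, thresholds, and generator list below are illustrative stand-ins, not the production service's schema:

```python
# Illustrative sketch of an Axis I integrity-clash audit. Field names and
# thresholds are hypothetical, not the production service's schema.

def audit_integrity(metadata: dict, forensics: dict) -> list[str]:
    """Return a list of human-readable clash descriptions."""
    clashes = []

    # A C2PA manifest claiming a camera capture clashes with a high
    # deepfake probability from the forensic model.
    if metadata.get("c2pa_capture_device") and forensics.get("deepfake_score", 0.0) > 0.8:
        clashes.append("C2PA claims camera capture, but forensic model flags synthesis")

    # An EXIF Software tag naming an AI generator is an immediate red flag.
    software = (metadata.get("exif_software") or "").lower()
    if any(tool in software for tool in ("midjourney", "dall-e", "stable diffusion")):
        clashes.append(f"EXIF Software tag names an AI generator: {software}")

    # Missing metadata entirely is suspicious but not conclusive.
    if not metadata:
        clashes.append("No C2PA/EXIF metadata present; provenance unverifiable")

    return clashes


clashes = audit_integrity(
    metadata={"c2pa_capture_device": "Canon EOS R5"},
    forensics={"deepfake_score": 0.93},
)
print(clashes)
```

The point of the audit is that each signal alone is weak, but a *contradiction* between signals (camera metadata plus a high synthesis score) is strong evidence.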
- 02
Built Axis II (Contextual Consistency, port 8002): the Cloudflare Workers AI Llama 3.2 Vision 11B model generates a natural-language description of the uploaded media, which is then embedded alongside the user's claim using BGE embeddings (Cloudflare Workers AI). Cosine similarity between description and claim exposes semantic mismatches, flagging images used with fabricated captions.
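The Axis II mismatch check ultimately reduces to cosine similarity between two embedding vectors. A minimal pure-Python sketch (in production the vectors come from BGE embeddings via Cloudflare Workers AI; the toy vectors and the ~0.7 threshold here are illustrative):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for BGE embeddings of the VLM description
# and the user's claim; a low score flags a contextual mismatch.
description_vec = [0.9, 0.1, 0.2]
claim_vec = [0.1, 0.9, 0.3]
score = cosine_similarity(description_vec, claim_vec)
print(f"similarity={score:.2f}")  # well below an illustrative ~0.7 match threshold
```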
- 03
Engineered Axis III (Source Credibility, port 8003): a multi-signal OSINT pipeline that runs WHOIS domain-age lookup (python-whois), VirusTotal threat intelligence scan, and Google Safe Browsing check against any submitted source URL, producing a composite credibility score and risk assessment with per-source breakdown.
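The composite credibility score fuses the three OSINT signals into one number with a per-source breakdown. A hedged sketch of the fusion logic, where the weights, the 5-year age saturation, and the 25-points-per-vote penalty are all illustrative choices rather than the production values:

```python
from datetime import date

def credibility_score(domain_created: date, vt_malicious_votes: int,
                      safebrowsing_flagged: bool, today: date) -> dict:
    """Fuse OSINT signals into a 0-100 credibility score.
    Weights and cutoffs are illustrative, not the production values."""
    # Domain age: freshly registered domains are a classic disinformation signal.
    age_days = (today - domain_created).days
    age_score = min(age_days / 365, 5) / 5 * 100  # saturates at 5 years

    # VirusTotal: malicious votes rapidly collapse the threat sub-score.
    threat_score = max(0, 100 - 25 * vt_malicious_votes)

    # Google Safe Browsing: binary pass/fail.
    sb_score = 0 if safebrowsing_flagged else 100

    composite = round(0.4 * age_score + 0.4 * threat_score + 0.2 * sb_score)
    return {"age": round(age_score), "threat": threat_score,
            "safe_browsing": sb_score, "composite": composite}

# A two-month-old domain with VT detections and a Safe Browsing flag.
report = credibility_score(date(2025, 1, 1), vt_malicious_votes=3,
                           safebrowsing_flagged=True, today=date(2025, 3, 1))
print(report)
```

Keeping the per-source breakdown alongside the composite is what lets the Glass Box report explain *which* signal dragged the score down.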
- 04
Built the LLM Service (port 8004) as a unified Cloudflare Workers AI client: Llama 3.3 70B for chain-of-thought Glass Box report generation from all three axes' structured outputs, Vision 11B for VLM description, and BGE embeddings for semantic similarity. The CoT report surfaces axis-level confidence, detected anomalies, and a plain-language summary, so any user can understand *why* the verdict was reached.
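The Glass Box generation step boils down to assembling the three axes' structured outputs into one chain-of-thought prompt. The prompt wording and field names below are a hypothetical sketch of the approach, not the production prompt:

```python
import json

def build_glass_box_prompt(authenticity: dict, context: dict, source: dict) -> str:
    """Assemble the three axes' structured outputs into a chain-of-thought
    prompt for the report model. Wording and field names are illustrative."""
    evidence = json.dumps({
        "axis_1_content_authenticity": authenticity,
        "axis_2_contextual_consistency": context,
        "axis_3_source_credibility": source,
    }, indent=2)
    return (
        "You are a media-verification analyst. Reason step by step over the\n"
        "evidence below, then output: per-axis confidence, detected anomalies,\n"
        "a plain-language summary, a 0-100 trust score, and a risk level\n"
        "(CRITICAL/HIGH/MEDIUM/LOW). Show your reasoning explicitly.\n\n"
        f"EVIDENCE:\n{evidence}"
    )

prompt = build_glass_box_prompt(
    {"deepfake_score": 0.93, "metadata_clashes": 1},
    {"claim_similarity": 0.27},
    {"composite_credibility": 11},
)
print(prompt[:80])
```

Because the model only ever sees the structured evidence, every sentence of the resulting report can be traced back to a specific axis output.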
- 05
Added AuthorDNA (port 8006) for text-only authorship fingerprinting: the service compares writing samples against a reference to build a stylometric profile and flag authorship inconsistencies. Built Trust Shield (port 8005) as both a backend service for extension-first fast verdicts and a Chrome Extension that injects real-time trust scores on any page. Implemented TruthStamp (port 8006) for verifiable blockchain provenance: mints and looks up cryptographic stamps anchoring verified content to an immutable record.
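AuthorDNA-style authorship fingerprinting can be sketched with a few surface stylometric features and a distance between profiles. Real stylometry (and ImBD-style AI-text detection) uses far richer features; the features and L1 distance here are illustrative stand-ins:

```python
import re

def style_fingerprint(text: str) -> dict:
    """Extract simple stylometric features; illustrative, not production."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "comma_rate": text.count(",") / max(len(words), 1),
    }

def style_distance(a: dict, b: dict) -> float:
    """L1 distance between two fingerprints; higher = less consistent."""
    return sum(abs(a[k] - b[k]) for k in a)

ref = style_fingerprint("Short, punchy sentences. Always. No exceptions.")
sample = style_fingerprint(
    "This remarkably elaborate sentence, which meanders through numerous "
    "subordinate clauses, could hardly have come from the same author."
)
print(round(style_distance(ref, sample), 2))
```

A sample whose fingerprint sits far from the reference profile gets flagged as an authorship inconsistency.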
Next.js 15 App Router frontend (Spline 3D Trust Sphere, react-dropzone upload, Recharts axis charts, Zustand state, Framer Motion) sends multipart analysis requests to the NestJS Core gateway (port 8000). The gateway orchestrates parallel calls to 7 FastAPI microservices: Authenticity (:8001) for C2PA/EXIF + HuggingFace deepfake + ImBD stylometry, Context (:8002) for Llama 3.2 Vision + BGE cosine similarity, OSINT (:8003) for WHOIS + VirusTotal + Safe Browsing, LLM (:8004) as the Cloudflare Workers AI proxy (Llama 70B + Vision 11B + BGE embeddings), AuthorDNA (:8006) for text authorship fingerprinting, TrustShield (:8005) for fast extension verdicts, and TruthStamp (:8006) for blockchain provenance minting. The Chrome Extension (Trust Shield) calls TrustShield directly for per-URL inline verdicts. Docker Compose orchestrates all services; a Helm chart ships a Kubernetes deployment for production.
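The gateway's fan-out is a `Promise.all` over the downstream services in NestJS. The same pattern, sketched in Python with `asyncio.gather` and stubbed service calls standing in for the real HTTP round-trips:

```python
import asyncio
import time

# Stubs standing in for HTTP calls to the FastAPI services; the real
# gateway is NestJS and uses Promise.all, but the fan-out pattern is
# the same as asyncio.gather.
async def call_service(name: str, latency_s: float) -> dict:
    await asyncio.sleep(latency_s)  # simulated network + model latency
    return {"service": name, "ok": True}

async def orchestrate() -> list[dict]:
    # The axes run concurrently, so total latency is the max of the
    # individual latencies, not their sum.
    return await asyncio.gather(
        call_service("authenticity", 0.03),
        call_service("context", 0.05),
        call_service("osint", 0.02),
    )

start = time.perf_counter()
results = asyncio.run(orchestrate())
elapsed = time.perf_counter() - start
print(len(results), f"{elapsed:.2f}s")  # ~0.05s rather than the 0.10s sum
```

This max-not-sum property is exactly why the parallel design keeps the full pipeline under the 30-second budget despite multiple model round-trips.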
App Router frontend: interactive Trust Sphere via @splinetool/react-spline, Recharts radar/axis charts, Zustand state, react-dropzone, Framer Motion
Orchestrates parallel calls to all 7 downstream FastAPI services and aggregates the VerificationReport schema (trust score 0-100, risk level, per-axis scores)
C2PA/EXIF metadata extraction, HuggingFace Inference API deepfake detection, integrity clash auditor, ImBD stylometric AI-text analyzer
Cloudflare Workers AI Llama 3.2 Vision 11B for VLM description + BGE embeddings for cosine similarity between description and user claim, detecting fabricated captions
WHOIS domain-age lookup, VirusTotal threat intelligence, Google Safe Browsing: composite credibility score with per-source breakdown
Cloudflare Workers AI: generates plain-language chain-of-thought Glass Box report fusing all three axes into one explainable verdict
Injects real-time trust scores into any webpage via the TrustShield service (:8005): fast verdict without a full analysis round-trip
TruthStamp (:8006) mints and verifies blockchain provenance stamps; AuthorDNA (:8006) builds stylometric author fingerprints to flag writing-sample inconsistencies
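The gateway's final VerificationReport fusion can be sketched as a weighted average of the per-axis scores mapped onto the risk bands. In Truth Engine the fusion is mediated by the Llama 3.3 70B CoT layer; the weights and band edges below are an illustrative stand-in:

```python
def fuse_trust_score(authenticity: float, context: float, source: float) -> dict:
    """Fuse three 0-100 axis scores into the final report. The weighted
    average and band edges are illustrative, not the production logic."""
    trust = round(0.4 * authenticity + 0.3 * context + 0.3 * source)
    if trust < 25:
        risk = "CRITICAL"
    elif trust < 50:
        risk = "HIGH"
    elif trust < 75:
        risk = "MEDIUM"
    else:
        risk = "LOW"
    return {"trust_score": trust, "risk_level": risk,
            "axes": {"authenticity": authenticity, "context": context,
                     "source": source}}

# Low scores on all three axes: the report should come out CRITICAL.
verdict = fuse_trust_score(authenticity=12, context=27, source=11)
print(verdict)
```

Keeping the per-axis scores in the report, rather than just the fused number, is what makes the verdict auditable.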
MenaCraft Hackathon: 1200 DT prize, hosted at Faculté des Sciences de Tunis
Content Authenticity + Contextual Consistency + Source Credibility: evaluated independently, then fused
NestJS gateway + 7 FastAPI services: Authenticity, Context, OSINT, LLM, AuthorDNA, TrustShield, TruthStamp
Llama 3.3 70B Glass Box CoT report surfaces every axis score and detected anomaly in plain language
All three axes run in parallel: deepfake scan, VLM description, OSINT, and CoT report in one round-trip
Trust Shield extension delivers inline per-URL trust scores directly in the browser

🏆 1st Place: MenaCraft Hackathon award ceremony at Faculté des Sciences de Tunis

Team Sa7absisa celebrating 1st Place with the 1200 DT prize
Running all three verification axes in parallel was the critical architectural decision: sequential analysis would have exceeded the 30-second demo time limit. NestJS's Promise.all orchestration kept the full pipeline under 30 seconds even with the Cloudflare Workers AI round-trips.
The Glass Box CoT report was the feature that won the hackathon. Judges could read *why* the verdict was CRITICAL, not just see a red score; that transparency is what separates a credible verification tool from a confidence meter.
Using Cloudflare Workers AI for all LLM/VLM calls (Llama 3.3 70B, Vision 11B, BGE embeddings) was the right call for a free-tier hackathon: zero cold starts, no GPU setup, and the free tier comfortably handled all demo traffic without a single rate-limit error.
The ImBD stylometric analyzer was an underrated Axis I feature: most deepfake detectors focus on images, but text-based AI detection (AI-generated articles, fake quotes) is equally important, and almost no existing tool covers both in one pipeline.
Trust Shield as a Chrome Extension transformed the demo from 'upload a file' to 'browse any news site and get real-time trust scores'; that live demo on actual misinformation pages was the moment the judges understood the product's value proposition.