SECUREMACHINESLAB.
We build the detection layer for the generative era. Model- and data-free deepfake forensics, watermarking defenses, and tools to trace synthetic media as it spreads.
Safer AI.
Secure world.
Pioneering forensic tools for researchers, newsrooms, and platform trust teams. Our systems run landmark-, frequency-, and mesh-level analysis in real time; click through to launch the live terminal.
A model-agnostic pipeline for spotting AI-generated and manipulated images and video. Trained on 2M+ synthetic samples across 40+ generators.
A conversational agent for media integrity analysts — upload, cross-check, cite. Backed by our detection stack and an evolving evidence graph.
Core
Research.
Pioneering the next generation of AI security through foundational research and practical forensic implementation. Three parallel threads: detect, prevent, predict.

Deepfake detection & AI forensics
Advanced forensics for identifying deepfakes and manipulated content without requiring extensive training datasets or prior knowledge of the generative model.
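As a toy illustration of the frequency-level analysis our detection work builds on: generative upsampling often leaves disproportionate high-frequency spectral energy, which can be measured without any knowledge of the source model. The function below is a minimal sketch under that assumption, not the production pipeline; the names and the cutoff value are ours.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency radius.

    A toy frequency-domain signal: unusually high values can hint at
    synthetic upsampling artifacts. Illustrative only.
    """
    # Power spectrum with the DC component shifted to the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized distance of each bin from the spectrum's center.
    dist = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    low = spectrum[dist < cutoff].sum()
    return float(1.0 - low / spectrum.sum())

# Smooth gradient (energy near DC) vs. checkerboard (energy at Nyquist).
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
checker = np.indices((64, 64)).sum(axis=0) % 2
print(high_freq_energy_ratio(smooth), high_freq_energy_ratio(checker))
```

Real detectors combine many such cues (plus landmark and mesh features) and learn the decision boundary; a single hand-set threshold is only a demonstration.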

Generation source blocking
Implementing architectural safeguards within AI models to prevent the creation of deepfakes at the foundational level — making the synthetic visibly synthetic, at the source.

Propagation prediction
Heuristic AI models designed to track, analyze, and predict the viral spread of synthetic content across digital networks. Intervene before the cascade.
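To make the propagation idea concrete, here is a minimal sketch of the classic independent cascade model, a standard way to simulate how content spreads over a follower graph. The graph, probabilities, and function names are illustrative assumptions, not our production models.

```python
import random

def independent_cascade(adj, seeds, p=0.1, rng=None):
    """One simulated spread under the independent cascade model.

    Each newly activated node gets a single chance to activate each
    neighbor with probability p. Returns the set of reached nodes.
    Toy illustration only.
    """
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for nbr in adj.get(node, []):
                if nbr not in active and rng.random() < p:
                    active.add(nbr)
                    next_frontier.append(nbr)
        frontier = next_frontier
    return active

# Tiny share graph: node 0 posts, edges are follower links.
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
reach = independent_cascade(graph, seeds={0}, p=0.9)
print(sorted(reach))
```

Averaging many such runs estimates expected reach, which is the quantity an early-intervention system would monitor before the cascade takes off.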
Born in DASH Lab at Sungkyunkwan University and grown into a company that ships forensic infrastructure.
We're researchers first. We write the papers, build the benchmarks, and then harden the best ideas into production systems that newsrooms, platforms, and policymakers actually deploy. Our thesis: generative AI is here to stay — so the web needs a trust layer that moves at the same speed.
Let's make
the web honest
again.
Researchers, trust teams, newsrooms, policymakers — if you're working on synthetic media from any angle, we want to hear from you.
