EST. 2024 · Suwon, KR
Forensic AI · Media integrity · Generative defense

SECURE
MACHINES
LAB.

We build the detection layer for the generative era. Model- and data-free deepfake forensics, watermarking defenses, and tools to trace synthetic media as it spreads.

DASH Lab · SKKU
Explore the systems ↓
Deepfake detection · Model-free forensics · Generation source blocking · Propagation prediction · AI forensics assistant · Synthetic watermarking · Media provenance · Adversarial robustness
99.4%
Detection AUROC
CVPR ’24 bench
<120ms
Analysis latency
per frame @ 1080p
14
Peer-reviewed publications
since 2023
2M+
Synthetic samples analyzed
and counting
02 / Systems
Products

Safer AI.
Secure world.

Pioneering forensic tools for researchers, newsrooms, and platform trust teams. Our systems run landmark, frequency, and mesh-level analysis in real time — click through to launch the live terminal.
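As a toy illustration of the frequency-level signal (a generic spectral heuristic, not our production pipeline; the function name and cutoff are illustrative), a detector can measure how much of a frame's energy sits in high spatial frequencies, one classic cue for generator artifacts:

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff."""
    # 2-D FFT of a grayscale frame, shifted so DC sits at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial distance from the spectrum center.
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

# Smooth gradients concentrate energy at low frequencies; noise spreads it.
rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = rng.standard_normal((64, 64))
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))  # True
```

Real systems fuse many such statistics (landmark geometry, mesh consistency, spectral residues) rather than relying on any single threshold.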

01
Forensic analysis
Deepfake Detector
Live

A model-agnostic pipeline for spotting AI-generated and manipulated images and video. Trained on 2M+ synthetic samples across 40+ generators.

99.4%
AUROC
118ms
Latency
40+
Models covered
Launch Deepfake Detector
02
Research copilot
Forensics Assistant
Live

A conversational agent for media integrity analysts — upload, cross-check, cite. Backed by our detection stack and an evolving evidence graph.

12.4k
Evidence sources
1.2s
Response time
8
Languages
Launch Forensics Assistant
In the field
Newsrooms
Verify user-submitted footage before publishing.
Platforms
Flag synthetic uploads at scale, at the edge.
Researchers
Benchmark new detection methods against ours.
Policy
Audit evidence for investigations and regulation.
03 / Research
What we study

Core
Research.

Pioneering the next generation of AI security through foundational research and practical forensic implementation. Three parallel threads — detect, prevent, predict.

Deepfake detection & AI forensics
DASH · CORE · STABLE · 01

Advanced forensics for identifying deepfakes and manipulated content without requiring extensive training datasets or prior knowledge of the generative model.

CVPR ’25 · NeurIPS ’24
Generation source blocking
SML · ALPHA · ACTIVE · 02

Implementing architectural safeguards within AI models to prevent the creation of deepfakes at the foundational level — making the synthetic visibly synthetic, at the source.

ICML ’25 · USENIX Sec.
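One classic building block for source-level marking is spread-spectrum watermarking: embed a keyed pseudo-random pattern at generation time, then detect it later by correlation. A minimal sketch, assuming a generic technique rather than our architecture (the `embed`/`detect` names and strength value are illustrative):

```python
import numpy as np

def embed(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    # Keyed pseudo-random pattern; the same key regenerates it at detection.
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return image + strength * pattern

def detect(image: np.ndarray, key: int) -> float:
    # Correlation with the keyed pattern; a high score means "mark present".
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return float((image * pattern).mean())

rng = np.random.default_rng(1)
clean = rng.standard_normal((128, 128))
marked = embed(clean, key=42)
print(detect(marked, key=42) > detect(clean, key=42))  # True
```

Embedding inside the generator itself, rather than as a post-hoc step, is what makes the synthetic traceable even after cropping or recompression degrades other signals.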
Propagation prediction
NODAL · BETA · MONITORING · 03

Heuristic AI models designed to track, analyze, and predict the viral spread of synthetic content across digital networks. Intervene before the cascade.

WWW ’25 · KDD ’24
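The spread dynamics above are often modeled with the independent-cascade process: each newly activated node gets one chance to activate each follower with some probability. A toy Monte-Carlo run (a standard textbook model, not our predictor; the graph and probability are made up):

```python
import random

def independent_cascade(adj, seeds, p=0.2, rng=None):
    """One Monte-Carlo run of the independent-cascade spread model."""
    rng = rng or random.Random(0)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                # Each edge fires at most once, with probability p.
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

# Tiny share graph: node -> followers who may repost.
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
reach = independent_cascade(graph, seeds=[0], p=0.9)
print(len(reach))
```

Running many such simulations gives an expected reach per seed post, which is the quantity an intervention (takedown, label, rate limit) tries to cut before the cascade completes.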
Selected publications
Recent work
All publications →
04 / About
About the lab

Born in DASH Lab at Sungkyunkwan University and grown into a company that ships forensic infrastructure.

We're researchers first. We write the papers, build the benchmarks, and then harden the best ideas into production systems that newsrooms, platforms, and policymakers actually deploy. Our thesis: generative AI is here to stay — so the web needs a trust layer that moves at the same speed.

Founded
2024
HQ
Suwon, KR
Affiliation
SKKU · DASH Lab
Team
12 researchers
Funded by
NRF, MSIT, angels
Hiring
Open (ML, infra)
05 / Contact

Let's make
the web honest
again.

Researchers, trust teams, newsrooms, policymakers — if you're working on synthetic media from any angle, we want to hear from you.

General
COPY
hello@securemachineslab.com
Partnerships, press, platform integrations
Research
COPY
swoo@g.skku.edu
Academic collaboration, paper discussions
Careers
COPY
careers@securemachineslab.com
Hiring ML / infra / research engineers
Press
COPY
press@securemachineslab.com
Media requests, on-background briefings
HQ
Suwon-si, Gyeonggi-do
Sungkyunkwan University · DASH Lab · South Korea