Your Code Stopped Changing. The AI Didn't.
A single scan is a gamble: some vulnerabilities are only found in 8 out of 100 runs. Continuous scanning means you don't have to get lucky; you just have to wait.
Why One Scan Isn't Enough
AI security analysis is non-deterministic. The same model analyzing the same code will explore different reasoning paths each time, and some paths lead to bugs that others miss entirely.
1 Scan
an 8% chance of catching a hard-to-find vulnerability in a single run
10 Scans
about a 57% chance of detection: barely better than a coin flip, even after ten attempts
40 Scans
roughly a 96% chance: near-certain detection. Continuous scanning gets you here automatically.
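Those numbers fall out of a single assumption: if a bug surfaces in 8% of runs (the 8-in-100 rate from the research quoted below) and runs are independent, the chance that all n scans miss it is 0.92^n. A quick sketch of the arithmetic:

```python
# Chance of catching a bug at least once in n independent scans,
# assuming each scan finds it with probability p (here p = 0.08,
# the 8-in-100 rate from Heelan's o3 benchmark).
def detection_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 40):
    print(f"{n:3d} scans -> {detection_probability(0.08, n):.1%}")
# Output:
#   1 scans -> 8.0%
#  10 scans -> 56.6%
#  40 scans -> 96.4%
```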
The Math of Repeated Runs
In 2025, security researcher Sean Heelan used OpenAI's o3 to find CVE-2025-37899, a remote zero-day in ksmbd, the Linux kernel's SMB implementation. The results quantified what continuous scanning practitioners already knew:
“o3 finds the kerberos authentication vulnerability in the benchmark in 8 of the 100 runs. In another 66 of the runs o3 concludes there is no bug present in the code (false negatives), and the remaining 28 reports are false positives.”
Same vulnerability. Same code. Different models, different odds.
Newer models find more, but only if you keep scanning.
Continuous Fuzzing, But for AI Security
The security industry already learned this lesson with fuzzing. The same principle applies to AI-driven vulnerability research.
Random inputs eventually hit edge cases
Fuzzers don't find every bug on the first run. They generate random inputs continuously, and over time they explore code paths that trigger crashes. That's why you run them for days, not minutes.
Run 1,000: no crash
Run 14,392: buffer overflow found
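A toy version of that loop, with a stand-in `parse` function playing the part of the code under test (real fuzzers like AFL or libFuzzer are coverage-guided, but the principle is the same):

```python
import random

def parse(data: bytes) -> None:
    # Stand-in for the target code. A real harness would feed the
    # input to the library or binary being fuzzed.
    if data[:2] == b"\x13\x37" and len(data) > 64:
        raise MemoryError("simulated buffer overflow")

run = 0
while True:
    run += 1
    payload = random.randbytes(random.randint(0, 128))
    try:
        parse(payload)
    except Exception as crash:
        # Most runs find nothing; eventually one random input
        # lands on the crashing path.
        print(f"run {run:,}: {crash!r}")
        break
```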
Different reasoning paths find different bugs
LLMs explore different reasoning paths each run. One run might focus on data flow, another on constraint logic, another on template interactions. Some paths lead to vulnerabilities others miss entirely.
Run 12: no finding
Run 23: under-constrained signal found
Three Reasons Results Improve Over Time
Even on unchanged code, your security coverage gets better every month.
Upgraded AI Models
On Heelan's benchmark, o3 found the target vulnerability 2-3x as often as Claude Sonnet 3.7. As models improve, we automatically run them against your codebase. A bug that was unfindable in January may be routine by June.
New Vulnerability Patterns
Every real-world audit we complete teaches us new vulnerability patterns. These get encoded into our analysis agents and run against your codebase on the next scan.
Probabilistic Discovery
LLM analysis is non-deterministic by nature. Each run explores different reasoning paths. Some vulnerabilities only surface in a fraction of runs, but with enough runs, they surface reliably.
The Zero-Day That Was a Side Effect
When Sean Heelan ran o3 against 12,000 lines of Linux kernel SMB code, the model found the original benchmark vulnerability in just 1 of 100 runs. But buried in the other 99 runs of “noise” was something unexpected.
CVE-2025-37899
A previously unknown remote zero-day in the Linux kernel's SMB implementation (a use-after-free in the session logoff handler), discovered as a side effect of running the analysis 100 times.
The vulnerability was hiding in what initially looked like false positive reports. It took repeated runs and careful triage to surface it. This is the power of continuous analysis: you don't just find the bugs you're looking for; you find the ones you didn't know existed.
How zkao Implements Continuous Security
Connect once. We handle the rest.
Connect your repository
Point zkao at your GitHub repo. We detect your Circom circuits and set up the analysis pipeline automatically.
Scans run on a schedule
We run analysis continuously: on a schedule, when we upgrade models, and when we add new vulnerability patterns from real audits.
Deduplicated findings
You never see the same bug twice. New findings are surfaced as they're discovered, even months after the initial scan. (One way to implement this is sketched after these steps.)
Improving coverage, automatically
As AI models get better and we learn new patterns from our 100+ real-world audits, your codebase gets analyzed with increasingly sophisticated techniques. No action required from you.
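For a sense of how the deduplication step can work, here is a sketch of the general idea, not zkao's actual pipeline, with illustrative finding fields: fingerprint each finding by its stable attributes and only surface fingerprints you haven't seen in any earlier run.

```python
import hashlib
import json

seen: set[str] = set()  # fingerprints from all previous scans

def fingerprint(finding: dict) -> str:
    # Hash the attributes that make two reports "the same bug".
    # Line numbers are left out so the fingerprint survives
    # unrelated edits. Field names here are illustrative.
    key = json.dumps(
        {k: finding[k] for k in ("file", "template", "description")},
        sort_keys=True,
    )
    return hashlib.sha256(key.encode()).hexdigest()

def new_findings(scan_results: list[dict]) -> list[dict]:
    fresh = []
    for finding in scan_results:
        fp = fingerprint(finding)
        if fp not in seen:
            seen.add(fp)
            fresh.append(finding)
    return fresh
```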
“But My Code Isn't Ready”
The worst time to set up continuous scanning is after you needed it. The best time is now, even if your code is a mess.
Mid-Refactor?
Set it up now, ignore early results. When your clean code lands, scanning picks up automatically. No need to remember to come back. Your future self will thank you.
Code Too Messy to Triage?
Use early findings as a regression checklist. If we find 15 bugs in the old code, you can verify each one is actually fixed in the new code. Messy findings now become a free test suite later; see the sketch below.
Refactors Introduce New Bugs
The riskiest time for your codebase is right after a major rewrite. Having scanning already running means you catch refactor-introduced vulnerabilities immediately, not months later.
Think of it like setting up CI before your tests are green. You don't wait for perfect code to set up the pipeline; you set up the pipeline so you know when the code is ready.
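Concretely, the regression-checklist idea can be as small as this (a hedged sketch: the report format and the `current_ids` input are hypothetical, not zkao's real export):

```python
import json

def regression_check(old_report_path: str, current_ids: set[str]) -> bool:
    # Every bug reported against the old code should be absent
    # from the latest scan of the rewritten code.
    with open(old_report_path) as f:
        old_findings = json.load(f)  # e.g. [{"id": ..., "title": ...}, ...]
    all_fixed = True
    for finding in old_findings:
        if finding["id"] in current_ids:
            print(f'STILL PRESENT: {finding["id"]} {finding["title"]}')
            all_fixed = False
    return all_fixed
```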
One-Shot vs. Continuous
The difference between scanning once and scanning continuously.
| | One-Shot Scan | Continuous |
|---|---|---|
| Hard-to-find bugs | 8% chance | 96%+ chance |
| New model capabilities | Missed | Automatic |
| New vulnerability patterns | Missed | Added continuously |
| Unexpected discoveries | Unlikely | More runs = more chances |
| Effort after setup | Manual re-runs | Zero |
Set It and Forget It
Connect your repo once. We'll keep finding bugs for as long as you want us to.
Start Continuous Scanning