Public scoreboard

Every detection rate, against published ground truth.

RedVolt's default audit is end-to-end autonomous AI with no human in the loop. Most "AI security" vendors don't publish their detection numbers because the comparison against public ground truth would be uncomfortable. We do. Every benchmark below is measured against a published Code4rena report or the OWASP Top 10 catalog, so anyone can verify the numbers by re-running our engine on the same source.

7/7
BakerFi HIGH
8/8
veRWA HIGH
6/6
Wildcat HIGH
100% / 90%
Jito Critical / HIGH

All eight benchmarks

Click a row to read the full breakdown.

These are autonomous numbers

No human reviewed the engine's findings before they were scored against ground truth. Each benchmark's linked blog walks through every detection and every miss, mapped against the published Code4rena or OWASP catalog. We publish the misses too — that's how you tell whether a benchmark is honest.

For clients who also want a senior human auditor to review the AI findings, the optional Expert Review tier is available as a separate paid add-on. It is not included in the numbers above.

See the audit product