Implementation honesty, not just lint
Does this codebase actually do what it claims?
Winnow scans a GitHub repo, extracts every feature claim from README, routes, and UI, and matches each claim against real implementation evidence. It produces an Implementation Integrity Score with exact file-and-line citations.
Deterministic first
Every finding is produced by a rule you can read, not an LLM hunch: regex, AST patterns, and claim-vs-evidence diffs.
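A minimal sketch of what one such readable AST rule might look like, using Python's standard `ast` module. The rule name and sample code are invented for illustration; the point is determinism: same input, same finding, every run:

```python
import ast

def empty_handlers(source: str) -> list[tuple[str, int]]:
    """Flag functions whose entire body is `pass` or `...` --
    a claim of behavior with no implementation behind it."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and len(node.body) == 1:
            stmt = node.body[0]
            is_pass = isinstance(stmt, ast.Pass)
            is_ellipsis = (isinstance(stmt, ast.Expr)
                           and isinstance(stmt.value, ast.Constant)
                           and stmt.value.value is Ellipsis)
            if is_pass or is_ellipsis:
                findings.append((node.name, node.lineno))
    return findings

code = '''
def fix_everything():
    pass

def real_work():
    return 2 + 2
'''
print(empty_handlers(code))  # [('fix_everything', 2)]
```

Because the rule is plain code, a reviewer can audit exactly why a finding fired, and the citation (`lineno`) comes for free.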
Claim-aware
Extracts feature claims from README copy, page titles, button text, and environment variable names. Every verdict cites the exact line that makes (or fakes) the claim.
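One way to picture README extraction with line-level citations, as a hedged sketch: treat markdown bullets that lead with a feature verb as claims and record the file and line for each. The verb list and `extract_claims` helper are assumptions made up for this example:

```python
import re

# Hypothetical rule: a bullet starting with a feature verb is a claim.
CLAIM = re.compile(
    r"^\s*[-*]\s+((?:Supports|Exports|Sends|Generates|Scans)\b.+)")

def extract_claims(readme_text: str, path: str = "README.md"):
    """Return each matched claim with a file-and-line citation."""
    claims = []
    for lineno, line in enumerate(readme_text.splitlines(), start=1):
        m = CLAIM.match(line)
        if m:
            claims.append({"claim": m.group(1),
                           "cite": f"{path}:L{lineno}"})
    return claims

readme = """# MyApp
- Supports OAuth login
- Exports reports as PDF
- fast and lightweight
"""
for c in extract_claims(readme):
    print(c)
```

The last bullet ("fast and lightweight") is not captured: it is a quality adjective, not a checkable feature claim, which is exactly the distinction a claim extractor has to draw.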
Practical impact
Reports explain user-visible consequences. "Users may click Fix and nothing real happens" — not just rule IDs.