Fighting LLM Spam in Open Source with a “Web of Trust”
Imagine receiving a code submission that looks perfect: it follows the style guide, passes the linter, even includes tests. But underneath, it hides a subtle logic error that will only surface in production, three weeks later, when nobody's looking.
That’s the problem Tangled, a code collaboration platform, is tackling with something it calls a “vouching” system — essentially a trust network to combat the growing tide of LLM-generated code submissions that look correct but are “subtly wrong.”
The system went live on May 1, 2026, and it's a thoughtful response to a problem every open source maintainer is starting to face: AI tools are now so good at generating code that looks right at a glance that the barrier to submitting code has never been lower.
“The bar to submit code to a project has never been lower thanks to LLM based tooling. LLM tools are really good at creating ‘uncanny valley’ submissions. Code that looks correct but is subtly wrong.”
— Tangled Labs blog post on vouching, May 1, 2026
How It Works
Here's how it works: when a maintainer reviews a contributor's work, they can vouch for or denounce that contributor. These decisions are public records stored on the voucher's Personal Data Server (PDS), with an optional free-text reason field for context.
What happens next is the clever part: a Tangled "appview" service aggregates all this vouching data across the network and displays visual "hats" next to user profiles on issues, pull requests, and comments. A green shield means someone in your trust circle has vouched for that person; a red warning means someone you trust has denounced them.
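To make the shape of these records concrete, here is a minimal sketch of what a vouch/denounce record could contain. The field names and the `VouchRecord` class are illustrative assumptions, not Tangled's actual lexicon schema; they just capture the three ingredients the post describes: a subject, a vouch-or-denounce decision, and an optional reason.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class VouchRecord:
    """Hypothetical shape of a public vouch/denounce record on a PDS."""
    subject: str                    # identifier (e.g. a DID) of the contributor
    kind: str                       # "vouch" or "denounce"
    reason: Optional[str] = None    # optional free-text context
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = VouchRecord(
    subject="did:plc:example123",
    kind="vouch",
    reason="Careful, well-tested PRs over six months",
)
print(record.kind, record.subject)  # vouch did:plc:example123
```

Because the record is public and lives on the voucher's own server, anyone (including an appview) can aggregate these without a central reputation database.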
But here’s the key design decision: attenuation. You only see decisions made by people you trust — and the people they trust. It’s a transitive trust graph, not a global scoreboard. This avoids the problems that plague review-based reputation systems like GitHub’s own review history, where a single bad interaction or a bot can permanently damage someone’s standing.
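The transitive-trust idea can be sketched as a bounded graph walk: start from the people you trust directly, follow their vouches outward, and stop after a fixed number of hops so that trust attenuates with distance. This is not Tangled's actual algorithm; the function, the hop limit, and the example names are assumptions used to illustrate "you see decisions made by people you trust, and the people they trust."

```python
from collections import deque

def visible_vouchers(viewer, trusts, max_hops=2):
    """Return the set of users whose vouches the viewer sees.

    `trusts` maps each user to the set of users they have vouched for.
    Only people within `max_hops` of the viewer count; there is no
    global score, so the same contributor can look trusted to one
    viewer and unknown to another.
    """
    seen = {viewer}
    frontier = deque([(viewer, 0)])
    while frontier:
        user, depth = frontier.popleft()
        if depth == max_hops:
            continue  # attenuation: stop expanding past the hop limit
        for trusted in trusts.get(user, ()):
            if trusted not in seen:
                seen.add(trusted)
                frontier.append((trusted, depth + 1))
    return seen

trusts = {
    "alice": {"bob"},    # alice vouched for bob
    "bob": {"carol"},    # bob vouched for carol
    "carol": {"dave"},   # carol vouched for dave
}
# With two hops, alice sees vouches from bob and carol, but not dave's.
print(sorted(visible_vouchers("alice", trusts)))  # ['alice', 'bob', 'carol']
```

The key property is the one the article highlights: a bot denounced by strangers never shows up in your view, and a single bad interaction outside your trust graph can't damage anyone's standing with you.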
The planned features reveal some interesting thinking too. Vouches decay over time as maintainers move between projects. And — this is the one I like — vouching for a user right after merging a PR could automatically attach that PR as evidence in the vouch record. It turns good code reviews into permanent trust signals.
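The decay idea could be as simple as weighting each vouch by its age. A minimal sketch, assuming exponential decay with a 180-day half-life (both the formula and the half-life are my assumptions, not anything Tangled has published):

```python
def vouch_weight(age_days, half_life_days=180.0):
    """Weight in (0, 1]: 1.0 for a fresh vouch, 0.5 after one half-life."""
    return 0.5 ** (age_days / half_life_days)

print(round(vouch_weight(0), 2))    # 1.0
print(round(vouch_weight(180), 2))  # 0.5
print(round(vouch_weight(360), 2))  # 0.25
```

Decay like this means a maintainer who drifts away from a project stops anchoring its trust graph automatically, with no manual revocation step.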
Why This Matters
The “uncanny valley” of AI-generated code is a real and growing problem. LLMs are excellent at producing code that looks professional — correct indentation, proper variable naming, even appropriate comments — but they sometimes get the logic wrong in ways that are hard to spot without running the code.
For open source maintainers, this means reviewing code takes longer now. Every submission needs to be read line-by-line, not just accepted because it “looks like what I would write.” That’s expensive — and as AI-generated submissions become more common, it’s becoming unsustainable for small volunteer teams.
A web of trust system offers a different approach: instead of requiring every maintainer to be a detective, make the track record of contributors publicly visible so you can prioritize your attention where trust is low.
Not the Only Solution
Of course, this is a platform-specific solution built on Tangled’s PDS-based architecture. The question remains whether something like this could become a broader web standard — something that could span GitHub, GitLab, and other platforms too. The underlying idea is protocol-agnostic: trust relationships, public attestation, and visual indicators.
But the real test will be adoption. If only a fraction of maintainers use the system, its signal-to-noise ratio might not be worth the cognitive overhead of checking trust indicators. It needs critical mass — which is harder than you’d think in the fragmented open source ecosystem.
Still, it’s an interesting approach. In a world where generating bad code has gotten free and instant, maybe the answer isn’t better code reviewers — it’s better trust signals.
Sources:
– Tangled: Combat LLM spam by building a web of trust — Original announcement, May 1, 2026
– Tangled Labs — Context on the platform