Category: AI & Tools

  • NHS England Slams the Door on Open Source — And Why It’s Probably Too Late

    NHS England Slams the Door on Open Source — And Why It’s Probably Too Late

    NHS England has ordered all its technology leaders to make hundreds of GitHub repositories private by May 11, citing fears that emerging AI models could exploit publicly exposed source code. The directive represents a dramatic reversal of a longstanding policy that treated open source as the default for taxpayer-funded software.

    What Happened

    According to internal guidance seen by The Register, NHS England’s Engineering Board approved new instructions requiring all publicly accessible source code repositories to be set to private “unless there is an explicit and exceptional need.” The guidance specifically names Anthropic’s Mythos model as a concern — a frontier AI capable of large-scale code ingestion, inference, and automated vulnerability detection.

    The full guidance reads: “Public repositories materially increase the risk of unintended disclosure of source code, architectural decisions, configuration detail, and contextual information that may be exploited – particularly given rapid advancements in AI models capable of large-scale code ingestion, inference, and reasoning.”

    NHS England told The Register this is “merely a temporary measure” enacted while the organisation assesses the cybersecurity impact of AI developments. They added: “We will continue to publish source code where there is a clear need.” But no timeline was given for when the restriction might be lifted.

    The Mythos Factor

    Mythos, Anthropic’s code analysis model, is the specific trigger here. The model can ingest entire codebases, reason about their architecture, and identify vulnerabilities — essentially acting as an automated penetration tester at scale. For organisations with public codebases, that means any vulnerability that a human auditor might miss could be surfaced by AI in seconds.

    The irony is that Mythos was designed partly for defensive use — helping developers find and fix bugs before they reach production. And according to internal sources who spoke to The Register, very few of NHS England’s hundreds of open repositories contain anything remotely sensitive anyway: they hold documentation, architecture diagrams, and code for internal tools like web apps for managing clinic times.

    A Policy Written in Sand

    The NHS’s service manual — reflecting wider UK government policy — previously stated that all new source code should be made open source and shareable under an appropriate licence. The reasoning was straightforward:

    “Public services are built with public money. So unless there’s a good reason not to, the code they’re based on should be made available for other people to reuse and build on.”

    The manual went further: “Open source code can save teams duplicating effort and help them build better services faster. And publishing source code under an open licence means that you’re less likely to get locked in to working with a single supplier.”

    This was the philosophy behind NHSX and NHS Digital’s open source work — a body of code developed by some of the UK’s brightest engineers that was freely available for other health services, local authorities, and even other countries to reuse.

    But signs of wavering appeared late last year when The Register reported that NHS England was quietly deleting web pages devoted to communicating its approach to open source. The organisation blamed routine cleanup after NHSX and NHS Digital were folded into NHS England — but the pattern is clear.

    The Backlash Is Already Building

    The developer response has been swift and critical. The open source community sees this as exactly the kind of vendor lock-in risk that the original policy was designed to prevent. By closing the door on transparency, NHS England is handing more power to the proprietary software suppliers it contracts with — organisations like Palantir, which already has a controversial data platform deal with the NHS.

    And here’s the kicker: the security argument doesn’t hold water for most of the repos in question. Security through obscurity is a well-worn fallacy — if the code is genuinely insecure, making it private doesn’t fix the vulnerability, it just hides it from the people who could help patch it. The open source model means hundreds of eyes (and increasingly, AI tools used defensively) can find bugs before bad actors do.

    Too Late

    NHS England’s own service manual noted that open source “means that you’re less likely to get locked in to working with a single supplier.” By reversing course, the NHS is walking straight into that lock-in. The engineers who built those open repos — people who chose to make taxpayer-funded code public — now see their work locked behind corporate firewalls, accessible only to the consultants and contractors paid to maintain it.

    It’s a textbook example of security theatre: a visible action that makes leadership feel something is being done, while actually weakening the system’s long-term resilience.

    Sources: The Register (Connor Jones, May 5), Health Service Journal (Ben Clover), Computing, New Scientist

  • Google Chrome Is Silently Downloading a 4GB AI Model to Your Computer — And You Can’t Really Stop It

    Google Chrome Is Silently Downloading a 4GB AI Model to Your Computer — And You Can’t Really Stop It

    This is the kind of story that makes you question whether you’re the user of your browser, or the product. Starting with Chrome 136, Google has been quietly downloading a 4GB Gemini Nano model to users’ machines — without consent, without opt-in, and with no consumer-level way to opt out.

    What’s Happening

    According to reports from The Register, Malwarebytes (by Pieter Arntz, May 6), TechPowerUp, and security researcher Alexander Hanff at That Privacy Guy, Chrome is pulling down the full 4GB model file to local storage automatically.

    The model is Gemini Nano — Google’s smaller on-device AI model designed for tasks like summarising web content, drafting text, and answering questions. The idea is that it runs locally for privacy and speed, without sending your data to Google’s servers.

    In theory. In practice, you never asked for it, you weren’t told it was happening, and the only way to disable it is through enterprise-level group policy — something most individual users don’t have access to.

    Why This Matters

    A few reasons this has got the privacy and security communities worked up:

    • 4GB is not nothing. That’s a significant chunk of disk space sitting idle most of the time, and the model eats RAM whenever it actually runs. On lower-spec machines or those with smaller SSDs, it’s genuinely impactful.

    • No opt-in, no opt-out. The download happens automatically with Chrome updates. There’s no setting in chrome://settings that lets you say “no thanks.” The only way to disable it is via enterprise group policy (GeminiNanoDisabled), which requires a domain-joined machine or manually editing the Windows registry / Linux policy files.

    • The climate argument. As Hanff pointed out, “at a billion-device scale the climate costs are insane.” Billions of 4GB downloads, most of which may never be used by the end user. That’s terabytes of unnecessary data transfer.

    • The precedent. This is a browser silently installing AI capabilities on your machine. It’s a small step from “here’s a helpful summarisation feature” to “here’s a model that’s analysing everything you read.” The infrastructure is there now.

    How to Disable It (If You’re Technical)

    For Windows users who don’t have enterprise group policy:

    1. Open Registry Editor (regedit) as Administrator
    2. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome (creating the key if it doesn’t exist)
    3. Create a new DWORD value called GeminiNanoDisabled and set it to 1
    4. Restart Chrome
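
    If you’d rather script it, here is a minimal sketch of the same change using Python’s winreg module. It assumes the GeminiNanoDisabled policy name cited in the reports above; verify it against Google’s published Chrome policy list before relying on it.

    import winreg

    # Create (or open) the Chrome policy key and set the DWORD value,
    # mirroring steps 1-3 above. Run from an elevated (Administrator) prompt.
    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Policies\Google\Chrome")
    winreg.SetValueEx(key, "GeminiNanoDisabled", 0, winreg.REG_DWORD, 1)
    winreg.CloseKey(key)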

    Linux users can set the policy via /etc/chromium/policies/managed/ with a JSON file containing:

    {"GeminiNanoDisabled": true}

    Or you can just use a different browser. Firefox doesn’t have this problem.

    The Bigger Picture

    This isn’t the first time Google has faced pushback over silent AI rollouts. The company has been aggressively embedding AI across its products — Search, Docs, Gmail, and now Chrome itself — often with the “we’ll figure out the consent later” approach that’s become standard in the AI industry.

    What makes this particular case noteworthy is that it’s happening inside a web browser — the most fundamental piece of software on your computer. Chrome has roughly 65% market share on desktop. When your browser decides to install a 4GB neural network without asking, there’s not much you can do about it without abandoning the ecosystem entirely.

    The good news? The model is genuinely useful for the tasks it’s designed for, and the on-device processing means your browsing data stays on your machine. The bad news is that the choice shouldn’t be Google’s to make — it should be yours.

    Worth keeping an eye on. This could be the canary in the coal mine for a broader trend of browsers becoming opinionated AI platforms, whether we want them to or not.

  • Google’s Gemma 4 Gets a 3x Speed Boost

    Google’s Gemma 4 Gets a 3x Speed Boost — Here’s Why It Matters

    Google dropped something genuinely useful in the open-source AI world this week: Multi-Token Prediction (MTP) drafters for Gemma 4 that deliver up to a 3x inference speedup. The release was announced by Olivier Lacombe (Director, Product Management) and Maarten Grootendorst (Developer Relations Engineer) on Google’s own blog yesterday, May 5, 2026.

    For context: the original Gemma 4 models shipped just weeks ago and already hit 60 million downloads. Google’s not resting on that — they’re now releasing MTP drafters that make those same models run significantly faster on everything from consumer GPUs to mobile edge devices.

    How It Works

    The core problem is familiar to anyone who’s run a local LLM: inference is memory-bandwidth bound. With a 31B parameter model sitting in your VRAM, the GPU spends most of its cycles just shuttling weights around rather than doing useful compute. The result is frustratingly slow token generation, especially on consumer-grade GPUs.

    Speculative decoding solves this by pairing a heavy target model with a lightweight “drafter” model. The drafter predicts several future tokens at once — fast, because it’s small. The target model then verifies all those predictions in a single parallel pass. When the drafter is right (and it’s surprisingly often right on obvious continuations), you get multiple tokens output for the cost of one forward pass.

    From the Google blog:

    “When the target model agrees with the draft, it accepts the entire sequence in a single forward pass — and even generates an additional token of its own in the process.”

    This means your app can output the full drafted sequence plus one token in the time it usually takes for a single token.
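
    To make that accept/verify loop concrete, here is a toy Python sketch. The “models” below are stand-in lookup functions rather than real Gemma 4 weights, and production engines verify against full token distributions in one batched forward pass instead of the greedy, sequential check shown here:

    def greedy_next(model, context):
        """Return the model's single most likely next token for a context."""
        return model(context)

    def speculative_step(target, drafter, context, k=4):
        """Draft k tokens cheaply, then verify them with the target model.

        Returns the tokens emitted this step: the accepted prefix of the
        draft, plus one token from the target itself (its correction of the
        first mismatch, or a bonus token if all k drafted tokens matched).
        """
        # 1. The drafter speculates k tokens autoregressively (cheap).
        draft, ctx = [], list(context)
        for _ in range(k):
            tok = greedy_next(drafter, ctx)
            draft.append(tok)
            ctx.append(tok)

        # 2. The target checks every drafted position (a single parallel
        #    forward pass in a real engine; emulated sequentially here).
        emitted, ctx = [], list(context)
        for tok in draft:
            expected = greedy_next(target, ctx)
            if expected != tok:
                emitted.append(expected)  # reject: keep the target's token
                return emitted
            emitted.append(tok)           # accept the drafted token
            ctx.append(tok)

        # 3. All k accepted: the same verification pass yields one extra token.
        emitted.append(greedy_next(target, ctx))
        return emitted

    # Toy stand-ins: bigram "models" that map the last token to a next token.
    target = lambda ctx: {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}.get(ctx[-1], "<eos>")
    drafter = lambda ctx: {"the": "cat", "cat": "sat", "sat": "on", "on": "a"}.get(ctx[-1], "<eos>")

    print(speculative_step(target, drafter, ["the"]))
    # ['cat', 'sat', 'on', 'the']: four tokens for one verification pass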

    The Numbers

    Google tested the MTP drafters across multiple inference runtimes — LiteRT-LM, MLX, Hugging Face Transformers, and vLLM — and the speedups are substantial. The drafters are available for the full Gemma 4 family:

    • 26B Mixture of Experts and 31B Dense models for workstations and consumer GPUs
    • E2B and E4B models for edge devices where every millisecond of battery counts

    The best part: Google emphasizes zero quality degradation. Because the primary Gemma 4 model retains final verification of every token, you get identical frontier-class accuracy — just delivered significantly faster.
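
    If you want to try the pattern today, Hugging Face Transformers already exposes speculative decoding as “assisted generation” via the assistant_model argument to generate(). A hedged sketch follows; the checkpoint names are placeholders rather than confirmed Gemma 4 release IDs, so swap in whatever the official model cards list:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical checkpoint names; consult the official Gemma 4 model cards.
    tokenizer = AutoTokenizer.from_pretrained("google/gemma-4-31b")
    target = AutoModelForCausalLM.from_pretrained("google/gemma-4-31b")
    drafter = AutoModelForCausalLM.from_pretrained("google/gemma-4-mtp-drafter")

    inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt")

    # assistant_model switches generate() into speculative decoding: the
    # drafter proposes, the target verifies, and the output tokens are
    # identical to running the target alone -- just produced faster.
    outputs = target.generate(**inputs, assistant_model=drafter, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))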

    Why I’m Excited

    This is the kind of technical advance that matters more than another benchmark score. The Gemma 4 models with MTP drafters make running capable open models locally actually feasible for the people who need it most — developers building coding assistants, autonomous agents, and on-device AI.

    If you’ve been sitting on a GPU wondering whether local LLMs are practical for production workloads, this changes the calculus. A 3x speed boost doesn’t just make things faster — it makes things that would have been too slow suddenly viable.

    The MTP drafters are available through the standard Gemma 4 channels. If you’re running Gemma 4 locally, update your setup.


    Source: Google’s blog post on Gemma 4 MTP drafters

  • The Day AI Beat Doctors at Emergency Triage (And The Real Story Is Even Weirder)

    The Day AI Beat Doctors at Emergency Triage (And The Real Story Is Even Weirder)

    A Harvard study published in Science on April 30 just went mainstream, but the treatment planning results might be even more consequential than the headline.

    The study pitted OpenAI’s o1 reasoning model against 323 internal medicine doctors at Beth Israel Deaconess Medical Center in Boston, using 76 real ER cases. At the triage stage — when doctors have limited information and need to make rapid decisions — the AI identified the correct or very-close diagnosis in 67% of cases, compared to just 50–55% for human doctors.

    That headline alone is big news. But here’s where it gets weirder: when the AI was asked to plan treatment, it scored 89%, while the human doctors scored 34%. Yes, thirty-four percent.

    How the Study Worked

    Led by Arjun Manrai (who heads an AI lab at Harvard Medical School) and Dr. Adam Rodman (clinical researcher at Beth Israel), the team tested the model at three moments in the patient journey: triage, admission, and hospital care. All data was drawn from actual emergency department cases — no artificial test scenarios.

    The AI wasn’t doing physical examinations or talking to patients. It was working from electronic health records and written clinical information — just like a remote doctor trying to diagnose via phone. And with just that limited data, it still came out ahead.

    The Numbers Tell a Story

    Here’s the full breakdown from the study, as reported by both The Guardian and NPR:

    • Triage stage (limited info): AI 67% vs. doctors 50–55%
    • Admission (more detail available): AI 82% vs. experts 70–79%
    • Treatment planning: AI 89% vs. doctors 34% — where the AI’s advantage was “particularly pronounced,” says Dr. Rodman

    One of the real-world cases that got the team’s attention involved a patient who had been taking medication that wasn’t working. The AI correctly identified lupus as the underlying cause. The human doctors using conventional resources (search engines, medical references) did not.

    What This Actually Means

    Arjun Manrai’s take, quoted in the Guardian: “I don’t think our findings mean that AI replaces doctors… I think it does mean that we’re witnessing a really profound change in technology that will reshape medicine.”

    Dr. Rodman framed it as a “triadic care model” — the doctor, the patient, and an AI system working together. That’s the more likely near-term future than “AI replaces doctors.”

    Dr. David Reich of Mount Sinai, who was not involved in the study, summed it up pragmatically: “You have something which is quite accurate, possibly ready for prime time. Now the open question is how the heck do you introduce it into clinical workflows in ways that actually improve care?”

    The Irony

    Nearly 1 in 5 US physicians already use AI to assist with diagnosis, according to the study authors. In the UK, 16% of doctors use AI daily and another 15% use it weekly. Yet physicians still got beaten — in real ER cases, with actual patient data — by the very technology so many of them already lean on.

    The treatment planning gap is the part that got me. 89% vs 34% isn’t a marginal improvement — it’s the difference between a competent triage tool and something that fundamentally changes how emergency medicine works.


  • When AI Agents Can Open Bank Accounts (and Cloudflare Accounts): Stripe and Cloudflare Protocol

    When AI Agents Can Open Bank Accounts (and Cloudflare Accounts): Stripe and Cloudflare’s Bold New Protocol

    There’s a difference between “AI agents can write code” and “AI agents can buy things.” The latter is harder to build, trickier to secure, and genuinely transformative.

    Today, Cloudflare and Stripe announced a protocol that lets AI agents autonomously create cloud accounts, register domains, and deploy applications — using your payment method (with safeguards).

    The announcement is part of Stripe Projects (still in beta, announced at their Sessions 2026 conference) combined with a new Cloudflare Agent Skills integration. Together, they represent what might be the first serious attempt at a standardized protocol for autonomous economic agency.

    How It Works: Zero Human Friction

    The flow is simple enough to be almost magical:

    1. Discovery — The agent calls a Stripe CLI command to query the catalog of available services. No prior knowledge needed. The catalog includes AgentMail, Supabase, Hugging Face, Twilio, and 24+ other providers.

    2. Authorization — Stripe acts as the identity provider. If your Stripe login email is associated with a Cloudflare account, an OAuth flow kicks off. If not, Cloudflare auto-creates an account. Credentials are securely stored.

    3. Payment — A payment token enables providers to bill you for anything the agent provisions. Stripe sets a $100 monthly maximum per provider by default. You can adjust this and set budget alerts.

    Once complete, the agent has built and deployed a site on a new Cloudflare account, registered a domain, and has an authorization token. It went from “literal zero” to full deployment — all autonomously.
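
    To see the shape of the flow in code, here is an illustrative Python sketch. Every call below is a hypothetical stand-in for the protocol’s three steps, not a real Stripe or Cloudflare SDK surface; it exists only to mirror the discovery, authorization, and payment sequence described above.

    def provision_site(agent, stripe, project_name):
        # 1. Discovery: ask the catalog which providers are agent-ready.
        catalog = stripe.catalog.list()                       # hypothetical call
        cloudflare = next(p for p in catalog if p.name == "cloudflare")

        # 2. Authorization: Stripe acts as the identity provider. An OAuth
        #    flow links an existing Cloudflare account, or one is auto-created.
        credentials = stripe.authorize(provider=cloudflare)   # hypothetical call

        # 3. Payment: a tokenized payment method with a spending ceiling,
        #    defaulting to $100 per month per provider.
        token = stripe.payment_tokens.create(                 # hypothetical call
            provider=cloudflare, monthly_cap_usd=100)

        # With credentials and a capped payment token, the agent can now
        # register a domain and deploy without further human input.
        return agent.deploy(project_name, credentials=credentials, payment=token)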


    The Security Tradeoffs Are Real

    This isn’t without risk. Shashi Bellamkonda at Info-Tech Research Group noted that this will attract cyber crooks as well as legitimate developers. The protocol extends OAuth into payment territory — a move that’s “super cool, bleeding edge” but also creates new attack surfaces, as security researcher Shipley pointed out.

    To its credit, the protocol uses OAuth 2.0, OpenID Connect (OIDC), and payment tokenization. The $100 cap per provider is a meaningful guardrail. Agents are designed to prompt for “input and approval when necessary” — like when there’s no linked payment method.

    But the fundamental question remains: how confident are we in an agent’s judgment when it comes to signing contracts (even implicitly via $100 credit cards), registering domains (which could be used for phishing), or provisioning services that could be exploited?

    Why This Matters More Than It Sounds

    Think about what’s happening here. AI agents have been able to write code for a while. But to do something with that code — deploy it, buy infrastructure, register a domain, set up an email forwarding service — you needed a human to manually log into dashboards, click buttons, authorize billing. This protocol removes that bottleneck entirely.

    As Sid Chatterjee and Brendan Irvine-Broque (Cloudflare product managers) wrote: “Anyone can build — zero friction for the user.”

    Shashi Bellamkonda nailed the business perspective: “This is Cloudflare turning every partner with signed-in users into a sales channel, and that is how you grow revenue in a developer market.”

    From a user experience standpoint, security researcher Shipley called it “technology platform Nirvana” — making it faster for anyone to buy and use your service.

    The Bigger Pattern

    This fits a broader trend. Uber announced an Expedia integration for hotel bookings — the “everything app.” Other payment processors are embedding provisioning into their APIs. Amazon’s Agents as a Service (launched last year) lets AI agents make purchases on Amazon.

    But what Cloudflare+Stripe are building is different because it’s open and standardized. Not a proprietary integration. An extensible protocol that any platform with signed-in users can plug into.

    The company argued that the new protocol standardizes what are typically “one off or bespoke” cross-product integrations. It uses OAuth, and extends further into payments and account creation in a way that “treats agents as a first-class concern.”

    Shashi Bellamkonda added a pragmatic observation: the complexity for partner networks around transaction execution and accountability will be significant. “This will require considerable upfront thought on developing these comparatively new business models.”

    Where to Watch

    Cloudflare is also offering $100,000 in credits for startups coming through Stripe Atlas, a smart way to drive early adoption. Stripe Atlas helps companies incorporate in Delaware, set up banking, and start fundraising — so this isn’t just for hobbyists, it’s for real startups launching on Day One.

    Watch for follow-ups on:
    • Liability frameworks — who’s responsible when an agent misprovisions or gets hijacked
    • Additional integrations — more providers beyond the initial 24+
    • Budget cap evolution — the $100/month default might grow as trust builds
    • Enterprise adoption — will this show up in GitHub Actions or CI/CD workflows?

    The Bottom Line

    The agentic coding wars are getting attention for the right reasons (Mistral’s Medium 3.5 with remote agents, covered below, is a big deal). But the economic agentic layer — agents that can autonomously provision, purchase, and deploy — is where the real friction gets wiped out.

    It’s not ready for production at scale. But it’s approaching it fast. And as Shipley put it: “Vibe coders will rejoice.”

    So will the bad actors. That’s the tradeoff.

    Sources: Cloudflare Blog, InfoWorld, Stripe Sessions 2026, KuCoin News

  • Mistral AI Goes Big: Remote Coding Agents, 128B Model, and the Promise of Async AI

    Mistral AI Goes Big: Remote Coding Agents, 128B Model, and the Promise of Async AI

    Last week, Mistral AI released something quietly significant — Mistral Medium 3.5, their first “flagship merged model”, paired with a feature that changes how coding agents actually feel to use.

    For once, a new model release isn’t just about benchmark numbers. It’s about a shift in how we interact with AI-powered development.

    The Model: 128B Dense, Self-Hosted on Four GPUs

    Mistral Medium 3.5 is a 128-billion-parameter dense model with a 256K context window, trained from scratch with a custom vision encoder (not a reused CLIP, which is notable). It handles instruction-following, reasoning, and coding in a single weight set — what Mistral calls their first “merged” flagship model.

    The numbers: 77.6% on SWE-Bench Verified, ahead of Mistral’s own Devstral 2 and Qwen3.5 397B A17B. It also scores 91.4 on τ³-Telecom. Released on Hugging Face under a modified MIT license, and available for self-hosting on as few as four GPUs.

    API pricing sits at $1.50 per million input tokens and $7.50 per million output tokens.
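
    For a sense of scale, here is a quick back-of-the-envelope calculation at those rates (the token counts are illustrative assumptions, not Mistral’s figures):

    # Cost of one hypothetical agent session at the listed rates.
    INPUT_RATE = 1.50 / 1_000_000    # dollars per input token
    OUTPUT_RATE = 7.50 / 1_000_000   # dollars per output token

    input_tokens, output_tokens = 200_000, 30_000  # e.g. a large-repo refactor
    cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    print(f"${cost:.2f}")  # roughly $0.53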


    The Feature That Matters: Remote Agents in Vibe

    Here’s the part that’s genuinely interesting. Until now, Mistral Vibe (their coding agent, accessible via CLI) ran locally on your laptop. You kicked it off, watched your terminal, and babysat every step. You were the bottleneck.

    Mistral has moved Vibe sessions to the cloud.

    Sessions now run asynchronously — you start a coding task, walk away, and it keeps going. You can spawn multiple agents in parallel, inspect diffs and tool calls in real-time, and the agent opens a pull request when done. You review the PR. You didn’t watch every keystroke.

    Local CLI sessions can even be “teleported” to the cloud when you want to leave them running — with session history, task state, and approvals all carrying across.

    Integration-wise, Vibe connects to GitHub (code and PRs), Linear and Jira (issues), Sentry (incidents), and Slack or Teams (reporting).

    Source: Mistral AI official announcement

    Le Chat’s Work Mode: Beyond Coding

    Mistral Medium 3.5 also powers a new Work mode in Le Chat — an agentic mode for multi-step tasks beyond coding. The agent becomes the execution backend for the assistant, calling tools in parallel and working through projects until they’re complete.

    • Cross-tool workflows: catching up across email, messages, and calendar.
    • Research and synthesis: diving across the web, internal docs, and connected tools.
    • Inbox triage: drafting replies, creating Jira issues from team discussions, sending Slack summaries.

    Sessions persist longer than a typical chat response. Every action is visible, with tool calls and reasoning rationale surfaced. Explicit approval is required for sensitive tasks.

    Source: The New Stack – Mistral pushes coding agents to the cloud

    Why This Matters

    The agentic coding race is heating up. Cursor, GitHub Copilot, Amazon Q, Claude Code — they’re all pushing in this direction. But Mistral’s approach is distinct because:

    1. The model is open weights. If you run your own infrastructure, you can run this yourself. No vendor lock-in on the base model.

    2. The 256K context window is huge. Processing on the order of 200,000 tokens in a single pass means the model can handle large codebases more effectively than models with smaller context windows.

    3. Configurable reasoning effort. The same model can dial up compute for a complex multi-step task on a single API call — dial down for quick lookups. No model switching required (a sketch of what that could look like follows this list).

    4. The “teleport” feature is practical. If you’ve ever had a coding session running locally and then needed to walk away from your machine, this actually solves a real workflow problem.
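
    Here is a hedged sketch of what a per-request reasoning dial could look like with the mistralai Python SDK. The reasoning_effort parameter and the model ID are assumptions inferred from the announcement, not confirmed API surface, so check Mistral’s documentation for the real argument names:

    from mistralai import Mistral

    client = Mistral(api_key="...")

    # Dial compute up for a hard multi-step task...
    deep = client.chat.complete(
        model="mistral-medium-3.5",   # hypothetical model ID
        messages=[{"role": "user",
                   "content": "Refactor this module and explain the tradeoffs."}],
        reasoning_effort="high",      # hypothetical parameter
    )

    # ...and back down for a quick lookup, with no model switch.
    quick = client.chat.complete(
        model="mistral-medium-3.5",   # hypothetical model ID
        messages=[{"role": "user", "content": "What does errno 111 mean?"}],
        reasoning_effort="low",       # hypothetical parameter
    )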

    The Bigger Picture

    Mistral, based in Paris, continues to position itself as Europe’s answer to the US AI labs. They’re not just training better models — they’re building complete agent infrastructures. Medium 3.5 wasn’t just a model release; it was a platform update.

    The fact that it scores 77.6% on SWE-Bench Verified with “only” 128B parameters (versus Qwen3.5’s 397B) also suggests Mistral is making progress on parameter efficiency — bigger wins per parameter, which is critical for anyone running these models on their own hardware.

    As agentic development tools move from “interesting demo” to “daily workflow”, the question becomes less about which model has the highest benchmark score and more about which one integrates cleanly into your existing toolchain. Mistral’s Medium 3.5, with its open weights, configurable reasoning, and cloud agents, is making that path clearer.


  • Fighting LLM Spam in Open Source with a Web of Trust

    Fighting LLM Spam in Open Source with a “Web of Trust”

    Imagine submitting code that looks perfect — follows the style guide, passes the linting, even includes tests. But underneath, it has a subtle logic error that will only surface in production, three weeks later, when nobody’s looking.

    That’s the problem Tangled, a code collaboration platform, is tackling with something it calls a “vouching” system — essentially a trust network to combat the growing tide of LLM-generated code submissions that look correct but are “subtly wrong.”

    The system went live on May 1, 2026, and it’s a thoughtful response to a problem that every open source maintainer is starting to face: the barrier to submitting code has never been lower because AI tools are so good at generating code that looks right at a glance.

    “The bar to submit code to a project has never been lower thanks to LLM based tooling. LLM tools are really good at creating ‘uncanny valley’ submissions. Code that looks correct but is subtly wrong.”

    Tangled Labs blog post on vouching, May 1, 2026

    How It Works

    Here are the mechanics: when a maintainer reviews a contributor’s work, they can vouch for or denounce that contributor. These are public records stored on the maintainer’s Personal Data Server (PDS), with an optional text reason field for context.

    What happens next is the clever part: a Tangled “appview” service aggregates all this vouching data across the network and displays visual “hats” over user profiles on issues, pull requests, and comments. A green shield means someone in your trust circle vouched for them. A red warning means someone you trust denounced them.

    But here’s the key design decision: attenuation. You only see decisions made by people you trust — and the people they trust. It’s a transitive trust graph, not a global scoreboard. This avoids the problems that plague review-based reputation systems like GitHub’s own review history, where a single bad interaction or a bot can permanently damage someone’s standing.
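
    A minimal sketch of that attenuated, transitive lookup, using a hypothetical data model rather than Tangled’s actual implementation: a vouch only renders for you if its author is reachable through your own trust edges within a bounded number of hops.

    from collections import deque

    # who -> the set of accounts they vouch for (public records on each PDS)
    VOUCHES = {
        "alice": {"bob", "carol"},
        "bob": {"dave"},
        "carol": set(),
        "dave": {"mallory"},
    }

    def trusted_circle(viewer, max_depth=2):
        """Everyone reachable from the viewer's vouches, up to max_depth hops.

        The depth bound is the attenuation: trust weakens with distance,
        so far-away vouches simply don't render for this viewer.
        """
        seen, frontier = {viewer}, deque([(viewer, 0)])
        while frontier:
            who, depth = frontier.popleft()
            if depth == max_depth:
                continue
            for vouched in VOUCHES.get(who, ()):
                if vouched not in seen:
                    seen.add(vouched)
                    frontier.append((vouched, depth + 1))
        return seen - {viewer}

    print(trusted_circle("alice"))
    # bob and carol (1 hop) and dave (2 hops); mallory is past the horizon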


    The planned features reveal some interesting thinking too. Vouches decay over time as maintainers move between projects. And — this is the one I like — vouching for a user right after merging a PR could automatically attach that PR as evidence in the vouch record. It turns good code reviews into permanent trust signals.

    Why This Matters

    The “uncanny valley” of AI-generated code is a real and growing problem. LLMs are excellent at producing code that looks professional — correct indentation, proper variable naming, even appropriate comments — but they sometimes get the logic wrong in ways that are hard to spot without running the code.

    For open source maintainers, this means reviewing code takes longer now. Every submission needs to be read line-by-line, not just accepted because it “looks like what I would write.” That’s expensive — and as AI-generated submissions become more common, it’s becoming unsustainable for small volunteer teams.

    A web of trust system offers a different approach: instead of requiring every maintainer to be a detective, make the track record of contributors publicly visible so you can prioritize your attention where trust is low.

    Not the Only Solution

    Of course, this is a platform-specific solution built on Tangled’s PDS-based architecture. The question remains whether something like this could become a broader web standard — something that could span GitHub, GitLab, and other platforms too. The underlying idea is protocol-agnostic: trust relationships, public attestation, and visual indicators.

    But the real test will be adoption. If only a fraction of maintainers use the system, its signal-to-noise ratio might not be worth the cognitive overhead of checking trust indicators. It needs critical mass — which is harder than you’d think in the fragmented open source ecosystem.

    Still, it’s an interesting approach. In a world where generating bad code has gotten free and instant, maybe the answer isn’t better code reviewers — it’s better trust signals.

    Sources:
    Tangled: Combat LLM spam by building a web of trust — Original announcement, May 1, 2026
    Tangled Labs — Context on the platform