Original Creator Loses to Copycat Site? NanoClaw’s SEO Battle Exposes a Trust Crisis in Search Engines

When an open-source project with over 18,000 GitHub stars searches for its own name only to find a copycat site ranking ahead of the official website, the result is more than a tech-community absurdity: it exposes a fundamental flaw in how modern search engines verify content.

An Identity Battle that Began in February

Software engineer Gavriel Cohen launched NanoClaw in February of this year. It is a security-focused open-source AI agent platform designed to be an alternative to the popular project OpenClaw. The project quickly gained traction: it was covered by VentureBeat, interviewed by The Register, and even publicly praised for its architectural design by AI researcher Andrej Karpathy.

However, just as the project took off, a problem quietly emerged. Around February 8, someone registered the domain nanoclaw.net and set up a website that automatically scraped content from the GitHub README. At the time, Cohen hadn’t built an official website yet, as the GitHub repository itself served as the core presentation of the project.

With increasing media coverage, more people began contacting Cohen to report issues with “his” website—but those issues were actually happening on the copycat site.

Why Did Standard SEO Tactics Fail?

Cohen took swift action:

  • He built the real official website at nanoclaw.dev
  • Added links from the GitHub repository
  • Implemented structured data
  • Submitted it to Google Search Console
  • Issued takedown notices to Google, Cloudflare, and the domain registrar
  • Reached out to media outlets to link directly to the official domain
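The "implemented structured data" step above typically means embedding a schema.org JSON-LD block in the site's HTML so crawlers can tie the brand name to the official domain. Here is a minimal sketch that generates such a block; aside from the confirmed domain nanoclaw.dev, every field value is illustrative, and the GitHub URL is deliberately left elided:

```python
import json

# Illustrative schema.org payload. Only the nanoclaw.dev domain is
# confirmed by the article; the other values are placeholders.
schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "NanoClaw",
    "url": "https://nanoclaw.dev",
    "applicationCategory": "DeveloperApplication",
    # "sameAs" links the entity to other authoritative profiles,
    # e.g. the GitHub repository (real URL omitted here):
    "sameAs": ["https://github.com/..."],
}

# Wrap it in the script tag that belongs in the page's <head>:
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(schema, indent=2)
    + "\n</script>"
)
print(snippet)
```

Structured data alone does not guarantee ranking, but it gives crawlers an explicit, machine-readable claim of identity that a scraper site would have to actively forge.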

Despite all this, testing as of March 5 showed that the copycat site still ranked #1 on Google for the search “NanoClaw,” while the official site failed to appear even on the first few pages of results.

On X, Cohen pointed out that the copycat site “displays factually incorrect info about the project + fakes its release date.” He further warned that this constitutes an “immediate, active security risk,” as whoever operates nanoclaw.net could swap the page at any time for malicious downloads or a phishing site.
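The page-swap risk Cohen describes is at least detectable: a periodic job can fingerprint the copycat page and alert when its content changes. A minimal sketch of the detection logic (the network fetch itself is omitted):

```python
import hashlib

def fingerprint(body: bytes) -> str:
    """SHA-256 digest of the raw response body; any page swap changes it."""
    return hashlib.sha256(body).hexdigest()

def changed(baseline: str, body: bytes) -> bool:
    """True if the fetched body no longer matches the recorded baseline."""
    return fingerprint(body) != baseline

# Usage sketch with stand-in payloads (a real monitor would fetch the
# page on a schedule and persist the baseline digest):
baseline = fingerprint(b"<html>scraped README copy</html>")
assert not changed(baseline, b"<html>scraped README copy</html>")
assert changed(baseline, b"<html>swapped malicious page</html>")
```

A hash comparison is crude (any benign edit also trips it), but for a site that should not exist at all, any change is worth a human look.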

A Systemic Problem Across Search Engines

A discussion thread on Hacker News revealed an even more concerning reality: this isn’t solely a Google problem.

  • DuckDuckGo: The copycat site ranked first, while the official site was completely absent.
  • Kagi: The copycat ranked third.
  • Bing, Brave, Ecosia, Qwant: The copycat site appeared highly ranked across all.
  • Mojeek: The only search engine that ranked the official site higher and excluded the copycat.

This cross-platform consistency suggests that the problem runs much deeper than a single search engine’s algorithm error.

The Dual Failure of Timing and Trust Mechanisms

A key factor may simply be chronological order: the copycat site was seemingly indexed before the official one. This prompts a rethinking of product launch strategies—when is the best time to register a domain?

Cohen followed standard open-source community practice: publish the code first, build a website later. But search engines indexed the imposter first, and correcting the record after the fact proved far harder than any of the recommended remediation steps would suggest.

Google’s John Mueller has previously stated that if scraped content consistently outranks original content, it could indicate quality issues with the original site. Yet Cohen’s case challenges this logic: his project had strong community signals (over 18,000 GitHub stars), media coverage, and expert endorsements, and his social profiles and GitHub repository all pointed to the correct domain. On the surface, the most visible trust signals supported the official website.

A Warning for Developers and Businesses

This incident provides several critical takeaways:

  1. Prioritize Domain Strategy: Even if a project is code-first in the beginning, register related domains early to prevent cybersquatting.
  2. Establish an Official Presence: Before gaining traction, set up a basic official website or landing page to establish digital identity.
  3. Proactively Monitor Search Results: Regularly search for your own brand name to spot ranking anomalies early.
  4. Prepare a Response Plan: Understand the infringement reporting processes for various platforms and have technical and legal resources ready.
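Takeaway 3, brand monitoring, reduces to a simple check: given a list of result URLs for a brand query, where do the official and imposter domains rank? The fetch step varies by search API and is omitted here; the ranking audit itself is pure logic. The example snapshot below is hypothetical, modeled on the March 5 state described above:

```python
from urllib.parse import urlparse

OFFICIAL = "nanoclaw.dev"  # confirmed official domain
IMPOSTER = "nanoclaw.net"  # the copycat domain

def _matches(host: str, domain: str) -> bool:
    # Exact match or subdomain of the target domain.
    return host == domain or host.endswith("." + domain)

def audit_results(result_urls, official=OFFICIAL, imposter=IMPOSTER):
    """Return 1-based ranks (official_rank, imposter_rank); None if absent."""
    official_rank = imposter_rank = None
    for rank, url in enumerate(result_urls, start=1):
        host = urlparse(url).hostname or ""
        if official_rank is None and _matches(host, official):
            official_rank = rank
        if imposter_rank is None and _matches(host, imposter):
            imposter_rank = rank
    return official_rank, imposter_rank

# Hypothetical SERP snapshot: copycat at #1, official buried at #3.
print(audit_results([
    "https://nanoclaw.net/",         # copycat ranks first
    "https://example.com/coverage",  # unrelated press result
    "https://nanoclaw.dev/",         # official site
]))  # -> (3, 1)
```

Run on a schedule, a check like this would have flagged the ranking anomaly the day it appeared, rather than leaving discovery to confused users reporting bugs against the wrong site.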

Unresolved Issues and Future Outlook

As of now, Cohen hasn’t shared whether Google has responded to his takedown requests. In the Hacker News thread, SEO practitioners offered specific advice, including tracing the copycat’s backlinks and contacting media outlets that mistakenly linked to the wrong domain to have them correct it.

The battle is far from over, and Google had not commented at the time of the original reporting. This goes beyond the plight of a single project; it serves as a test of the trust mechanisms spanning the entire search ecosystem. When original creators have to compete against imposters for their own identity, how should we rethink authority and authenticity in the digital age?

This article analyzes publicly available information to explore the systemic challenges of interaction between the tech community and search engines.