Growtika1 day ago
A couple years back John Reilly posted on HN "How I ruined my SEO" and I helped him fix it for free. He wrote about the whole thing here: https://johnnyreilly.com/how-we-fixed-my-seo

Happy to do the same for you if you want.

The quickest win in your case: map all the backlinks the .net site got (happy to pull this for you), then email every publication that linked to it. "Hey, you covered NanoClaw but linked to a fake site, here's the real one." You'd be surprised how many will actually swap the link. That alone could flip things.

Beyond that there's some technical SEO stuff on nanoclaw.dev that would help - structured data, schema, signals for search engines and LLMs. Happy to walk you through it.

update: ok this is getting more traction than I expected so let me give some practical stuff.

1. Google Search Console - did you add and verify nanoclaw.dev there? If not, do it now and submit your sitemap. Basic but critical.

2. I checked the fake site and it actually doesn't have that many backlinks, so the situation is more winnable than it looks.

3. Your GitHub repo has tons of high quality backlinks which is great. Outreach to those places, tell the story. I'm sure a few will add a link to your actual site. That alone makes you way more resilient to fakers going forward. This is only happening because everything is so new. Here's a list with all the backlinks pointing to your repo:

https://docs.google.com/spreadsheets/d/1bBrYsppQuVrktL1lPfNm...

4. Open social profiles for the project - Twitter/X, LinkedIn page if you want. This helps search engines build a knowledge graph around NanoClaw. Then add Organization and sameAs schema markup to nanoclaw.dev connecting all the dots (your site, the GitHub repo, the social profiles). This is how you tell Google "these all belong to the same entity."

5. One more thing - you had a chance to link to nanoclaw.dev from this HN thread but you linked to your tweet instead. Totally get it, but a strong link from a front-page HN post with all this traffic and engagement would do real work for your site's authority. If it's not crossing any rule (specific use case here so maybe check with the mods haha) drop a comment here with a link to nanoclaw.dev. I don't think anyone here would mind if it gets you a few steps closer to beating that fake site.
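The Organization and sameAs markup from point 4 is a small JSON-LD block. Here's a sketch that builds it in Python; everything except nanoclaw.dev and the GitHub repo URL is an assumption to swap out for the real profiles:

```python
import json

# Hypothetical entity data -- replace with the project's real profiles.
entity = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",  # or Organization, whichever fits better
    "name": "NanoClaw",
    "url": "https://nanoclaw.dev",
    "sameAs": [
        "https://github.com/gavrielc/nanoclaw",
        # add Twitter/X and LinkedIn profile URLs here once they exist
    ],
}

# Emit the <script> tag to paste into the site's <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(entity, indent=2)
    + "\n</script>"
)
print(snippet)
```

Paste the printed tag into the site's head; Google's Rich Results Test can confirm it parses.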

adamtaylor_131 day ago
This is very generous of you!

If I was the author, however, I'd still feel like I've been put in a predicament where I need to spend personal agency to fix something that Google has broken.

While that may just be a fact of life, my internal injustice-o-meter would be raging. Like, Google is going to take hours of my life because they, with all their billions of capital, can't figure out the canonically-true website when it's RIGHT THERE in the GitHub repository?

Ugh. I guess that's just the day we live in. But it makes me rage against the machine on the author's behalf.

MerrimanInd1 day ago
I had the exact same thought while reading the above comment, as helpful and generous as it is. Google's entire business model is to help people find things on the internet. They're an insanely well resourced company with all kinds of smart programmers. They have a moral and financial incentive to direct people to canonical sources of information. And STILL it's on this open-source dev to do all the steps outlined just to get the situation corrected?
pocksuppet1 day ago
Google's business model is to help Google's customers pay money to Google. Google Search's customers are mostly scammers who run adverts. Helping the user find a thing is at odds with helping the user find a scam that pays Google money.
nickff1 day ago
This is somewhat true; despite what HNers seem to think, online ads are not very effective (in terms of convincing people to buy things), and Google 'screws over' its advertising customers as often as it delivers deficient search results to users.
allthetime1 day ago
The billions of capital are exactly why they don't care about you. Also, Google didn't break anything. The only person who can claw out a place in this giant machine for you is you - all while billions of others attempt to do the same.
sam1r1 day ago
I can’t be the only one blasting "Killing in the Name" in my noise-canceling headphones the moment I read your comment..
yieldcrv1 day ago
Author already is spending personal agency

So the feeling is fine, and if he’s going to bother at all, which he is, he should be doing it efficiently. Everything so far was panic and inefficiency

gowld1 day ago
How many Google search results would point to OP's site?

If Google didn't exist, how many Google search results would point to OP's site?

input_sh1 day ago
> This is very generous of you!

No it's not, it's a sales pitch that intentionally ignores some of the things pointed out in the article. The author has invested time into proper SEO optimization, legit websites already link to it et cetera, it's all explained in the article.

From the perspective of a spammer: They need like 2 million MAU to earn below minimum wage. You're never getting those figures by doing something legit and actually useful to a tiny subset of people. You either need a vague site beyond any point of usefulness to anyone, or a network of knockoff sites. The reason you can't compete with these shitty SEO-spam versions of your site is that they already have a network of "authoritative" (in Google's eyes) sites, and all they have to do is link from them to a new one to expand their shitty network.

From the perspective of SEO agencies: They can't guarantee results. They can tell you vague, easily-googleable best practices and give you an output of some SEO SaaS that's far too expensive for an individual to purchase. Ahrefs(.com) is the prime example of this, the cheapest paid version costs $129/month. Do you care about SEO that much? No, so you go to these agencies and give them money for them to give you the output of such a tool. But that SaaS also only contains vague and nebulous "things to fix" to follow "best practices" because they also cannot know what drives traffic to your competitor from the outside perspective.

My best suggestion would be to start a website from day one. Doesn't matter how good the website is at first; Google favours sites that have existed for longer. If you're creating a website after the knock-off version already exists, you might as well give up immediately, it's gonna be near impossible to recover from that.

adamtaylor_131 day ago
> No it's not, it's a sales pitch that intentionally ignores some of the things pointed out in the article.

Sales pitch or not, someone offering their time to help me with a problem feels generous to me. To each their own, I suppose.

But again, you reinforce my point in your last sentence. Now anytime I want to make any little toy project (because how can anyone know when their toy project will blow up overnight?) I have to make a full blown website just to ensure I don't get SEO-spammed into oblivion?

My point still stands. Google is the problem and while we likely can't effectively do anything about it, it's frustrating as hell.

input_sh1 day ago
I never said Google isn't the problem. What I said is that going to an agency isn't gonna fix that problem any more than running a SaaS tool yourself will, because they're not Google and they have no insight into why Google prioritised one website over the other. Because, as you've pointed out, Google is the problem.

> I have to make a full blown website just to ensure I don't get SEO-spammed into oblivion?

No, I said a crappy one on purpose. How good it is doesn't matter; the sooner Google knows about the domain, the better. Might as well be a copy of your README file using one of the million SSGs GitHub supports that will turn that README file into a website. The only thing that matters is that the website exists and that Google knows about it before the other one.
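A placeholder site really can be that minimal. Here's a sketch (the filenames are assumptions) that wraps a README in just enough HTML for crawlers to index the domain, no SSG required:

```python
import html
import pathlib

def readme_to_index(readme_path="README.md", out_path="index.html"):
    """Wrap a README in a bare-bones HTML page so the domain has something to index."""
    text = pathlib.Path(readme_path).read_text(encoding="utf-8")
    # Use the first heading line as the page title, falling back to a stub.
    title = text.splitlines()[0].lstrip("# ").strip() if text else "Project"
    page = f"""<!doctype html>
<html lang="en">
<head><meta charset="utf-8"><title>{html.escape(title)}</title></head>
<body><pre>{html.escape(text)}</pre></body>
</html>"""
    pathlib.Path(out_path).write_text(page, encoding="utf-8")
    return page
```

Drop the output on the domain, submit it to Search Console, and iterate later.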

That's why many people purchase the domain on day 1, before they even start building the thing - and also why many have like a dozen domains in their account, a boulevard of broken dreams there to remind them once a year that they haven't done anything with them.

Still cheaper than an SEO agency, or in most cases even one month of Ahrefs access.

danny80001 day ago
If NanoClaw generates some revenue, you should trademark the name and also buy nanoclaw.com. Move the site to the .com domain and then do the steps above. All things being equal, a ".com" TLD should get you a higher page rank than your existing ".dev". Google is ranking the fake ".net" page higher than your ".dev". If your page weren't on the .dev TLD, it might be second already.
RyanOD1 day ago
Lame to have to do all this pointless busy work just to "win" the SEO battle.
eviks1 day ago
> Google Search Console - did you add and verify nanoclaw.dev there?

Did you read the post before promoting yourself?

> Submitted to Google Search Console probably 15 times.

> map all the backlinks the .net site got (happy to pull this for you), then email every publication that linked to it.

The links are already correct:

> NanoClaw got covered in The Register, VentureBeat, The New Stack, all linking to the real site.

graeme1 day ago
Fantastic advice
jongjong1 day ago
All this work to solve one website's problem... You can be sure MANY other open source projects are facing the same issue. It's just not a viable solution. There is something wrong with Google. Google has to fix it.
vegasbrianc1 day ago
great feedback!
AznHisoka1 day ago
I’m looking at this from a 3rd party of view (definitely not claiming the .net “deserves” to rank higher)

1) the .net version has a couple of very high authority links, namely from theregister and thenewstack (both of which have had lots of engagement).

I highly doubt it would have ranked without those links.

2) It's only been a week. Give Google time to understand which pages should rank higher.

3) Google is biased towards sites that cover a topic earlier than others.

I’ve seen pages that are still top 3 for a particular competitive query years later, simply because they were one of the first to write about it.

Suggestions: give it time. Meanwhile I would recommend linking to your website rather than your github everywhere you mention it, to give it a boost

niam1 day ago
If it saves anyone else the effort: I went to double-check the claim that those articles cited the wrong page, and it seems you're correct about The Register, but archive.org's earliest copies of the other two articles don't seem to reference the impostor site. They refer instead to the GitHub repo.

https://web.archive.org/web/20260301133636/https://www.there... https://web.archive.org/web/20260211162657/https://venturebe... https://web.archive.org/web/20260220201539/https://thenewsta...
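For anyone repeating this kind of check, the Wayback Machine's CDX API lists snapshots for a URL oldest-first. A sketch (the endpoint is real; error handling omitted):

```python
import json
import urllib.parse
import urllib.request

def cdx_url(page_url, limit=3):
    """Build a Wayback CDX API query for the oldest snapshots of a page."""
    query = urllib.parse.urlencode({
        "url": page_url,
        "output": "json",
        "limit": limit,             # CDX returns oldest-first by default
        "fl": "timestamp,original",
    })
    return f"https://web.archive.org/cdx/search/cdx?{query}"

def earliest_snapshots(page_url, limit=3):
    """Fetch the earliest archived (timestamp, original) rows; needs network access."""
    with urllib.request.urlopen(cdx_url(page_url, limit), timeout=30) as resp:
        rows = json.load(resp)
    return rows[1:]  # the first row is the CDX column header
```

Compare the earliest snapshot of each article against its current version to spot a swapped link.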

phkahler1 day ago
>> I’ve seen pages that are still top 3 for a particular competitive query years later, simply because they were one of the first to write about it.

With so many copycats on the internet, first to publish seems like a fairly good indication of the original source. But as we can see here, that's not always true.

Calzifer1 day ago
> 3) Google is biased towards sites that cover a topic earlier than others.

> I’ve seen pages that are still top 3 for a particular competitive query years later, simply because they were one of the first to write about it.

Reason why I still always get the Java 8 docs for any search. Annoying.

wavemode1 day ago
I think the real reason for that is simply that a lot of people are still running Java 8 (so those docs still see a lot of traffic). I remember reading that it's still used by something like 25% of Java developers.
tyingq1 day ago
Most of the problem is the "only been a week" part, likely. Though you're fighting an algorithm that's been patched in inconsistent places for all sorts of weights like "authority" and "quality".

Thousands of little weights driven by obscure attributes of the site that you're not really going to figure out by thrashing and changing stuff.

graemep1 day ago
I think the precaution developers should take is having a website and adding a page to it for each project.

If you must just have a repo self host it. In fact, selfhost the repo in any case.

uyzstvqs1 day ago
I did some experimenting using different search engines and AIs. Here are the results:

Google and Brave linked to the official GitHub repo followed by the fake domain. DuckDuckGo and Bing linked to the fake domain first, followed by the official GitHub. Mojeek gave higher ranking to two third party articles, but linked to both the official GitHub and website without fakes. Qwant was the worst, as the official website was the second result amongst multiple fake websites and an unrelated GitHub repo.

Then there are the AIs. ChatGPT, Google AI mode, Gemini, Grok, Perplexity, and Brave Search "Ask" all linked to the official website, and some added the GitHub repo as well. DuckDuckGo Search Assist linked to just the official GitHub. Google AI mode, Gemini and Grok also explicitly warned about the fake websites. Copilot got the official website and GitHub right, but linked to a presumably fake X account as well.

Conclusion: Google, Brave and Mojeek win in search. AI is very good and clearly beats search overall. Google AI mode, Gemini and Grok stand out in quality.

spyder1 day ago
For you... But the results are different for different users.

For me, Google shows the .net site first and the GitHub one second.

Asking ChatGPT 5.2 (Auto mode) to search for the NanoClaw site, it says the same: it links the .net site first and shows the GitHub as an optional page. When I try to give it a hint by asking "are you sure?", it even hallucinates that the site is linked from the GitHub repo:

"Yes — nanoclaw.net is the official documentation/site for the NanoClaw project, in the sense that it’s the project’s published homepage and is directly linked from its canonical open-source repository. It describes the project, features, installation steps, and links to the source code on GitHub, which is the authoritative source for the project’s codebase."

ChatGPT 5.2 (Thinking mode) and Claude get it right on the first try; they answer with the official .dev page first, and Claude shows the .net site second as "another site covering the project".

andai18 hours ago
I was surprised by what you said, so I used a browser that's not logged in to a Google account, to compare. Indeed the fake site ranks #1! Dang!

I guess Google has my account in an autism bucket, so biases GitHub links higher ;)

1kurac1 day ago
I tried AltPower Search and it exhibits the same issue as Google. I think you might just need to give it more time to index. Nanoclaw.dev has only been available for a week. Then, it's the lower relative reputation of the 'dev' vs. the 'net' domain ...

[1]: https://altpower.app [2]: https://web.archive.org/web/20260000000000*/https://nanoclaw... [3]: https://radar.cloudflare.com/tlds

sghitbyabazooka1 day ago
this thing is just google with a theme
Marsymars1 day ago
How did you prompt the AIs?
ariehkovler1 day ago
It's worse than that. There's a SECOND imitator that I actually stumbled on today while looking something up about nanoclaw - nanoclawS [dot] io - and that one's harvesting email addresses.

The obvious risk here is a bait and switch, where one of these sites switches their link to the Github repo to point to a malicious imitator repo instead.

One approach would be to go after the sites themselves, not their Google ranking. See if their hosts are willing to take them down. Is there anything you can assert copyright over to hang a DMCA request on? That's hard for an open-source project, I guess. And the fake sites aren't (yet) doing any actual scamming.

Good luck, though!

yorwba1 day ago
The article says "Filed takedown notices with Google, Cloudflare, and the domain registrar spaceship.com"
ariehkovler1 day ago
Yeah but you do need to hang the takedown on some technical reason like copyright or scamming. The issue here is there's no obvious victim. Makes a takedown harder.
mx7zysuj4xew1 day ago
Since the clone site isn't doing anything obviously malicious, like spreading malware or blatantly illegal content, none of those parties will take any action whatsoever, nor should they.
jacquesm1 day ago
It isn't doing that now, but you can't be sure about what they're going to be up to a little ways down the line, the fact that they are clearly trying to misdirect the traffic is proof positive they're up to no good.

Just do a bit of risk assessment if something like this were to be shipped to people that have come to blindly trust the source and you'll see why letting this slip is a very bad idea.

pocksuppet1 day ago
Most registrars and hosts consider phishing already malicious, even if there's no obvious malware download or anything.
luckylion1 day ago
"Phishing" has a _very_ different meaning from "offer the option to sign up for a newsletter", let's not conflate the two.
andai18 hours ago
Well, pretending to be an unrelated 3rd party for the purposes of harvesting people's personal information, which can then be used to send them emails, which they will think are from that unrelated 3rd party...
pocksuppet14 hours ago
The meaning registrars and hosts use is "looks like someone else's website"
luckylion13 hours ago
Could be, but this doesn't. It has the same name and is _about_ the same thing, but it doesn't look like the other site.

Just because you have pocksuppet.org and I hack pocksuppet.net doesn't mean that one of us is phishing.

james_marks1 day ago
*yet

Build the audience first, attack comes later

markus_zhang1 day ago
My advice to all OSS developers: if you open source your project, expect it to be abused in all possible ways. Don't open source if you have anxiety over it. It is how the world works, whether we like it or not.

I appreciate that you open source your projects for us to study. But TBH, please help yourself first.

pocksuppet1 day ago
In particular, if you license it MIT, and it's useful, expect Amazon to make a fork, not give you the source code, and earn tens of millions of dollars from it while you don't get a cent.

There's writing code for charity, and then there's this. Charity wasn't meant to include hyper-corporations.

nananana91 day ago
If you want evil megacorps to give you money when they use your thing, maybe say "if you're an evil megacorp you have to give me money when you use my thing" in the license?

If your license reads "hey, you can use this however you want, no matter who you are, and don't have to give me money", people will use it however they want, no matter who they are, and won't give you money.

Unfortunately, for decades, free software fanatics have bullied inexperienced and eager programmers who don't know any better into believing that an actual sustainable development model that respects their work is evil and that we should all work for free and beg for donations.

pocksuppet14 hours ago
Exactly. (A)GPL tried to balance this, in ways that still partially work. MIT software just throws its hands up and donates itself to evil megacorps in anger. If you believe in charity for ordinary people but not for evil megacorps, you'll put something in the license that evil megacorps don't like, or that forces them to work for the benefit of everyone by releasing their work that builds on yours.

You can't write "Amazon may not use this" and still be free as in freedom, but terms that force sharing seem to work.

gorjusborg1 day ago
> free software fanatics have bullied inexperienced and eager programmers

We must travel in different circles. I've been around a while, and I've never seen _any individual_ bullied for keeping their code closed source.

That said, I have an extreme bias toward only using open source code, for practical reasons, and I'm open about that.

eviks1 day ago
> I have an extreme bias toward only using open source code

If you have eyes closed how would you notice?

sfRattan1 day ago
> Unfortunately, for decades, free software fanatics have bullied inexperienced and eager programmers, who don't know any better into believing that an actual sustainable development model that respects their work is evil and that we should all work for free and beg for donations.

Silicon Valley hype monsters have done this, sure. And so have too many open source software advocates. But all the free software advocates I've read and listened to over the years have criticized MIT- and BSD-style permissive licenses for permitting exactly the freeloading you describe.

markus_zhang1 day ago
What if they simply use the code and don't give you the $$$? Are you going to sue them?
shevy-java1 day ago
I agree that MIT may not be the best licence here in such a use case scenario. The question is why corporations think they can be leeches though - and the bigger, the more of a leech they are on the ecosystem. That's just not right.
buran771 day ago
> The question is why corporations think they can be leeches though

Because they can, they don't just think they do. Everything about the framework they operate in allows or even encourages them to do it.

> That's just not right.

As a matter of morality, you're right. This is something very few people or corporations concern themselves with just as soon as there's real money to be made by not concerning themselves with this.

duskdozer2 hours ago
Because the copyright owners give them explicit approval to leech by using those licenses.
graemep1 day ago
> The question is why corporations think they can be leeches though

because they can be. They do not think they can be leeches, they know they can be leeches.

> That's just not right

I somewhat agree with you, but they do actually have permission to do it.

jonathanstrange1 day ago
IMHO, this is the wrong way of looking at it. You can choose any license you like. Choose the right license, and that should be the end of the discussion.
vablings1 day ago
The idea that software that is free NEEDS to be open source because "I don't want something running on my computer", from people who will then go and download the precompiled binary anyway, hurts my head a lot.
RcouF1uZ4gsC1 day ago
With the cloud, GPL won’t protect you either
pocksuppet14 hours ago
AGPL partially works. Can you think of any better terms? SSPL was a flop.
dspillett1 day ago
AGPLv3 largely does, if you can and do enforce it in some way when breaches happen.
atls1 day ago
AGPLv3 attempts to solve this problem, by forcing SaaS providers to open-source their modifications.

https://www.gnu.org/licenses/agpl-3.0.en.html

j1elo1 day ago
Depends on the needs of the licensor. AGPLv3 solves the problem of other players taking the code, improving it privately, and not sharing those improvements. But AGPLv3 is not a silver bullet for people who write Open Source code and expect to make a living from it. "Open Source is not a business plan".

https://news.ycombinator.com/item?id=45095581

Andrex1 day ago
Maybe Stallman had something of a point...
RcouF1uZ4gsC1 day ago
Nope. Stallman helped create this mess.

Free software underpins all the infrastructure of surveillance capitalism.

Andrex1 day ago
It underpins all software, and has wormed its way into Windows. I'm not sure this is as good a point as you think.
ekjhgkejhgk1 day ago
Stallman is always right, and HN always downvotes it.
0_____01 day ago
He's a terrible communicator, and sort of repellent in person. Contrast someone like Cory Doctorow who manages to be right about stuff and actually communicate effectively.
shevy-java1 day ago
I don't really share that point. If the message is correct, why would the other things matter? Due to "social norms"? It is a similar problem with Codes of Conduct. In general I don't care about CoCs. That does not mean I act in the opposite manner either - I just don't feel the need for CoCs.
_aavaa_1 day ago
> why would the other things matter

Because on the other end of the argument is an audience of human beings, not a theorem solver. Pretending that delivery does NOT matter, or even shouldn't matter, is out of touch with reality.

andrew_lettuce1 day ago
because brilliant jerk is not acceptable
littlestymaar1 day ago
Publicly defending pedophilia arguably isn't “right”, but if you restrict Stallman's positions to software licensing, then I'd agree with you.
ux2664781 day ago
The only instance in which he's ever engaged in "publicly defending pedophilia" was in remarks he made 20 years ago about the innocuity of "voluntary" sex with minors. He has since retracted those statements and publicly espoused a different and more informed opinion. There's certainly a large amount of very low-quality journalism engaging in bad-faith interpretations of things he's said in other contexts, though these aren't serious characterizations, only hallucinations manufactured by professional shysters to fulfill unspoken agendas. At this point, dredging it up and holding it against him in perpetuity is a bit wrongheaded.
graemep1 day ago
Of course restrict it to his opinions on software licensing. I think that is the sort of thing people mean when they say he was right.

Lots of people made similar claims. Most notably, The National Council for Civil Liberties (now called Liberty), the UK's leading civil/human rights organisation, made submissions to parliament claiming that sex with minors was not always harmful, had a pro-paedo organisation as an affiliate, and gave them a representative on the gay rights subcommittee: https://www.thetimes.com/travel/destinations/uk-travel/scotl... The people involved were unaffected, some reaching fairly high political positions.

A lot of other people whose works are respected have actually had sex with minors. Eric Gill and Oscar Wilde for example.

None of that makes Stallman's opinions defensible in my opinion. On the other hand I am happy to ignore his opinions on that topic and still value his opinions on other things.

ux2664781 day ago
The entire point of my post is that it's no longer his opinion.

> Through personal conversations in recent years, I've learned to understand how sex with a child can harm per psychologically. This changed my mind about the matter: I think adults should not do that.

https://stallman.org/archives/2019-sep-dec.html#14_September...

graemep20 hours ago
Both your point and my point are true.

Obviously I am glad he has abandoned his opinions.

I do think it is terrible that the politicians, activists, teachers etc. who held such opinions in the past did not suffer severe career consequences even if they subsequently changed their opinions. I think they cannot be trusted in those areas. However, Stallman is not in such an area.

peaseagee1 day ago
Tell that to my spouse who, at age 14, was given his contact card by him directly.
corndoge1 day ago
Wow, I'd be thrilled if I met stallman and got his contact card at age 14!
dTal1 day ago
I'm not following - are you implying that handing a contact card to someone is a sexual pass? Or is it only considered sexual when the recipient is underage?
ekjhgkejhgk1 day ago
I wish at 14 I had people of such integrity around me.
tredre31 day ago
He was wrong about refusing to make gcc more modular for fear that it would be used to insert proprietary plugins, which is why llvm is behind every new language or dev tool now and gcc is only relevant because the kernel still depends on it (for now).

His opinions on software have been largely out of touch for the past 20 years. People might yearn for his ideals, but it's just not the world we live in.

littlestymaar1 day ago
> His opinions on software have been largely out of touch for the past 20 years

I said “software licensing”, you're talking about “software”.

ekjhgkejhgk1 day ago
I keep hearing this.

Please quote the statement where Stallman defends pedophilia.

Not a quote of someone else saying that Stallman defends pedophilia, but a quote by Stallman himself.

frizlab1 day ago
And whatever license you use, expect it to be crawled by AI, and have AI provider make millions on it.
smegger0011 day ago
> if you license it MIT, and it's useful, expect Amazon to make a fork, not give you the source code,

That's why the GPL family of licenses exists.

MIT/BSD-family licenses are "do whatever you want with this".

If you want to make money off your pet open-source project, I recommend multi-licensing it under a copyleft license, with copyright assignment required for contributions, and offering other licenses for a fee.

mkehrt1 day ago
I don't understand your point? If you write code with an MIT license, this is what you would expect.
pocksuppet14 hours ago
People are conditioned not to think about it.
shevy-java1 day ago
Totally agreed.

I find it strange that people use the MIT licence and then complain "big greedy corporation did not contribute back anything". Though I also agree that this leeching approach by corporations is a problem to the ecosystem. MIT just is not the right licence to fight that.

gowld1 day ago
So? I am not about to create AWS. I'm glad people can use my free software on their own machines, on rented servers, or hosted by an expert.
alpaca1281 day ago
AWS can profit more from it than smaller organizations or individuals, making it even more untouchable by potential competition.

A market with little competition costs you too in the long term.

Ma8ee1 day ago
Are you still glad when AWS starts selling you software as a service and make hundreds of millions and you get nothing?
pfrrp1 day ago
There is even a software "law" related to this: https://www.hyrumslaw.com/

" With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody. "

Vegenoid1 day ago
I pay for Kagi to get better search results. Lately, I’ve felt that Kagi’s search has been just as full of low-information and AI generated results as Google. I’ve been wondering why I’m still paying for it. This seemed like a good litmus test. Unfortunately, Kagi displays pretty much the same results as Google for nanoclaw.
duskdozer1 hour ago
Isn't Kagi basically just using a blocklist? In which case it's whack-a-mole as new sites spring up or bubble to the top of other results. I keep my own blocklist and intermittently search key phrases to blanket-block new sites, and there are often new sites popping up.
Vegenoid1 hour ago
This sounds very interesting, could you elaborate on your methods and tools?
soiltype1 day ago
Yeah that's increasingly been my feeling as well. I have to keep prefacing my Kagi recommendations with, "web search is less and less useful every year, but..."

I still appreciate being able to customize rankings, bangs, and redirects. But with how utterly shit the web is overall, any web search is basically only good if you know the site(s) the answer(s) will be on. When you're searching for something novel-to-you, even Kagi is just going to show you a full page of unregulated slop on the dumbest, just-registered-this-year domains. Real information is increasingly limited to small islands of trust.

duxup1 day ago
I don’t like any search engines now :(
CSSer1 day ago
It's because the search engine is being eaten by the LLM. I'm not suggesting that it's a perfect substitute. It's just what I feel is happening.
TSiege1 day ago
more like LLM garbage are rotting search engines from the inside out
duxup1 day ago
Naw this is a pre LLM problem.
bigiain1 day ago
Yep. SEO spam has been a thing for decades.

LLMs have supercharged it though; it's so much easier to create dozens or hundreds or thousands of ultra-low-effort LLM-written webpages and websites than it ever was before LLMs.

CSSer1 day ago
I'm not talking about LLMs diluting search. I'm saying users are using LLMs to search more than search itself, including in search engines.
frereubu1 day ago
I hadn't really noticed anything like this until you pointed it out. My main use for Kagi is to pin Wikipedia results... I just tried searching for "nanoclaw" on Kagi (I'm in the UK so results biased towards there) and got:

1. nanoclaw[dot]net (!)

2. github.com/qwibitai/nanoclaw which looks like a ripoff?

3. Three videos, at least one of which looks like slop with crypto ads

4. github.com/gavrielc/nanoclaw which I presume is the real repo judging from the name?

5. Three "interesting finds" the top one of which is nanoclaw.dev, but with the title "Don't trust AI agents" because it's a blog post from that site

6. A fork of the qwibitai/nanoclaw repo

bigiain1 day ago
> 2. github.com/qwibitai/nanoclaw which looks like a ripoff?

That is literally the GitHub repo the original article shows as being "real".

bob10291 day ago
Losing the SEO battle is a lot like losing money on the stock market. The system you are fighting is incredibly efficient and will never in a trillion years give a single shit about your specific concerns. You can hire lawyers and spend time complaining about it all day on social media. But you'll rarely get a drop of blood out of this stone. The best you can do is to step back, reevaluate your understanding of the market, and adjust your strategy.
allthetime1 day ago
Piggybacking on the Claw hype, surprised when someone piggybacks on you...
stusmall1 day ago
Especially when the original claw had to change its name because it was piggybacking on another product's hype...
ajross1 day ago
That was exactly my first thought. The better framing here isn't "honest site victimized by Google linking to their IP-thieving scammer clone", it's "dude lost in an arms race of eyeball chasing and is salty about it".
GeoAtreides1 day ago
And I'm losing the sanity battle for my own mind with all these AI-generated posts. Pls, I beg you: two lines by your hand are worth 100,000 generated tokens.
MarkSweep1 day ago
The link on GitHub to the real site is marked with rel="nofollow". I wonder if it would make sense for GitHub to remove nofollow in some circumstances. Perhaps based on some sort of reputation system or if the site links back to the repo with a <link rel="self" href="..." /> in the header? Presumably that would help the real site rank higher when the repo ranks highly.
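A rough sketch of how that reciprocal check could work, assuming the hypothetical `rel="self"` convention from this comment (GitHub implements nothing like this today; the function names and URLs are illustrative):

```python
from html.parser import HTMLParser

class RelLinkFinder(HTMLParser):
    """Collects the href of every <link> tag carrying a given rel value."""
    def __init__(self, rel):
        super().__init__()
        self.rel = rel
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        # <link> tags only; attrs arrive as (name, value) pairs, names lowercased
        if tag != "link":
            return
        d = dict(attrs)
        if d.get("rel") == self.rel and d.get("href"):
            self.hrefs.append(d["href"])

def site_claims_repo(page_html, repo_url, rel="self"):
    """True if the page declares the repo via <link rel="self" href=...>,
    i.e. the kind of signal a host could check before dropping nofollow."""
    finder = RelLinkFinder(rel)
    finder.feed(page_html)
    return repo_url in finder.hrefs

page = '<head><link rel="self" href="https://github.com/example/project" /></head>'
site_claims_repo(page, "https://github.com/example/project")  # True
```

In practice GitHub would fetch the declared homepage itself and only drop nofollow when the check passes, layered with whatever reputation signals it wants on top.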
geocar1 day ago
I don't see any reason that GitHub should use rel="nofollow"

Github only has authority because people put their shit there; if people want to point that back at the "right" website, Github should be helping facilitate that, instead of trying to help Google make their dogshit search index any better.

I mean, seriously, doesn't Bing own Github anyway?

pocksuppet1 day ago
Perverse incentives strike again! Websites that allow links in user-generated content are spammed with user-generated spam links to improve SEO of spam sites, which hurts the site's own reputation because most of the links on it are spam. To avoid this, all sites use nofollow.
geocar18 hours ago
As this example shows, by all sites using nofollow, Github is improving the SEO of spam sites.

What the fuck are you talking about?

pocksuppet14 hours ago
GitHub doesn't care if spam sites have SEO, as long as GitHub isn't being penalized for linking to them.
geocar3 hours ago
Why exactly do you think GitHub should be penalized?

Talk about perverse.

Sweepi1 day ago
> When you Google "NanoClaw," a fake website ranks #2 globally, right below the project's GitHub.

Unfortunately, the fake website [.net] is also #3 on Kagi, and #1 on Duckduckgo. On Kagi, the Github is #1 and nanoclaw.dev is #4, but only if you count "Interesting Finds". On Duckduckgo, the Github is #2 and nanoclaw.dev is nowhere to be found.

tracker11 day ago
Do what Louis Rossman did... just ask Google's AI what you need to change on your site... Apparently that's the secret now.
signorovitch1 day ago
> This isn't an SEO problem. This is a Google problem.

I've tested on a few of the big search engines, and nanoclaw.dev is never in the first page.

Gemini was also unable to find the .dev, even in "Research Mode." The only way I was able to get a direct link to nanoclaw.dev was with chatgpt, which found it by scraping the GitHub (it also spat out links to a couple of other copies it found from google.)

Seems this is a wider SEO issue, one which infiltrates even the technology supposed to replace it.

pbmonster1 day ago
> Gemini was also unable to find the .dev, even in "Research Mode."

Unsurprisingly, right? Gemini just uses the same back end as Google itself, which - according to OP - doesn't list his site on page 1, page 2, or page 5.

Depending on the prompt, it should have gotten the link from the GitHub, but that's an indirect hint from a secondary source; it probably weights the Google index quite heavily when it does research.

networkcat1 day ago
Before installing new software, I usually visit its GitHub page or Wikipedia entry first and click through to the official site from there. I just don't trust the 'official' sites that pop up in Google search results. How many of you do the same?
fritzo1 day ago
Don't forget the SourceForge rug pull, when the once definitive central source of truth was bought out and became a venue for malware
eviks1 day ago
Why not use your package manager as a first step instead?
mareko1 day ago
@Gavriel if you're here, have you looked into filing a trademark for NanoClaw? Once you have a registered mark (or even a pending application), you get much stronger leverage with domain registrars, Cloudflare, and Google for takedowns.

UDRP disputes become straightforward when you can show the other party registered a domain using your mark in bad faith. It won't fix the Google ranking overnight, but it gives you real legal teeth beyond just SEO whack-a-mole.

raylad7 hours ago
So the real site is https://nanoclaw.dev

(putting this here for the search engines to see)

throwaway858251 day ago
People forget that Google is a malware services company. A significant part of their revenue is fake OBS malware and the like.
youknownothing1 day ago
> I've done everything you're supposed to do and more.

By the sound of it, everything except reporting it? Winning SEO just means appearing before them in search results, but the fake page shouldn't just lose the race, it should be taken down.

ICANN specifies how to deal with this kind of issue: https://www.icann.org/en/system/files/files/submitting-dns-a...

shadowgovt1 day ago
Comparing the two sites side-by-side (nanoclaw.net, the fake, and nanoclaw.dev, the correct one), there's also the issue that nanoclaw.net is doing a better job of looking like a correct website.

The fake site:

- includes a copyright statement

- includes a bottom sitemap

- includes an "author" meta-tag

- includes a sameAs to discord "nanoclaw", where the real site references some random string discord server

- has a .net instead of a .dev

Given all that plus the PageRank feedback loop of the .net having been up longer and enough people having found what they're looking for from it to not trigger Google's low-quality signals, author is fighting an uphill battle here; the squatters know what they're doing.

samuelknight1 day ago
Copycats are not a new problem. You can be completely open source and have a trademark on the project name.
roywiggins1 day ago
It might be mitigated a bit by having a website that doesn't look like AI slop, just to differentiate it from the duplicates which are also AI slop.
azangru1 day ago
> So I built a real website. That was two weeks ago.

Is Google supposed to have drastic updates to its index over 2 weeks?

stavros1 day ago
The whole project is a month old, and two weeks were more than enough for Google to rank the fake site first, so yes?
shadowgovt1 day ago
There is significant first-mover advantage in the index, especially when the public is finding the initial result to be good enough to satisfy their questions.

Google doesn't care more about authoritative answers than the public does; the public is one of Google's signals for good-quality results.

bubblewand1 day ago
Back when they were good at being a web search, yes.
eviks1 day ago
Yes, computers are pretty fast. But also, don't ignore history: the website shouldn't have been ranked higher than the GitHub repo in the first place.
carlosjobim1 day ago
It usually takes one or two days for them to start ranking new pages. They're fast!
AznHisoka1 day ago
Not these days, in my experience. Maybe 5-10 years ago. I imagine Google is so inundated with spam and AI slop that they are being more discriminating about what to crawl and index.
philipwhiuk1 day ago
Uh? Yes?
czhu121 day ago
I've been developing and maintaining https://canine.sh and https://hellocsv.github.io/HelloCSV/ for some time now, and it's really odd what pops up when you google these.

Neither of these projects has anything requiring payment anywhere, but tons of sites pop up trying to "sell" them. I wouldn't even know what that means, and I'm kind of tempted to drop in a credit card to see what happens. Would they auto-send you a link to the public repo?

Most of it is quite lazy and hasn't quite kept up with modern AI capabilities. They mostly just scrape the text I wrote and present it with some screenshots that I created. I can imagine a future where

- really nice landing pages are generated

- the product is entirely rebranded

- marketing is automated (linkedin, google ads, etc)

and someone can develop some autonomous system that basically finds high quality, yet unknown open source projects, and redeploys it and sells it online for actual money.

socketcluster1 day ago
Google buried my popular open source project deep in its search results. It's a very niche technical field with niche keywords but it shows all the scams and paid services on the front page and my project is not even in the first 5 pages when I type exact keywords that are present in the page. It only shows my project when I type its exact name.

It feels like Google is actively discriminating against my project in its algorithms. At this point, I wouldn't be surprised if there is some code in the google algorithm which is something like if (untrustedDomain(domain)) score -= x and they probably have some highly paid 'engineer' maintaining this blacklist. It definitely feels like this.

blackoil1 day ago
Theory is that Google pushes up pages which have spent on Google Ads.
lucasluitjes1 day ago
I've been annoyed with Google search quality lately and was wondering how the others fared on this specific issue. Turns out, mostly not much better.

Bing, DuckDuckGo, Qwant, Ecosia, Brave all had the github repo and nanoclaw.net (the fake homepage) in the first or second place. Marginalia had fascinating results about biology but only tangentially related Nanoclaw results, not the github repo or either the fake or real homepage.

Mojeek was the exception, sort of. It had some random news sites up top, but the github repo in 2nd place and nanoclaw.dev (the real homepage) in the 4th place. The fake nanoclaw.net did not show.

Kagi is the only one I couldn't try because apparently I used up my free credits a year back. Can anyone see how they compare?

troymc1 day ago
For me in Canada today, Kagi is showing nanoclaw.wrongtld as the third text link, after two different GitHub repos (why two? I didn't have time to sort that out). I clicked the thing to block the link to the site with the wrong TLD; hopefully other Kagi subscribers will do the same.
vogu661 day ago
My default is ecosia and below sponsored links there is only the github and pages talking about the thing, no official or unofficial page. I guess that's better?

It gives two sponsored links to openclaw things, so no fake either (presumably, I don't know what they are).

WD-421 day ago
Is there an acronym for “AI generated, didn’t read”?
jccooper1 day ago
I don't see that Google cares much about backlinks any more. Seems like it's all about "content" keywords and maybe a little time-on-site. The domain is a huge signal, which is probably where the problem comes from here.

Sadly, Google's generally better against all the new AI-generated content farms than other players, so maybe they're still running PageRank somewhere.

bubblewand1 day ago
Yeah, Google stopped even trying to usefully index most of the web around ‘08 or ‘09 or so. Was super obvious when it happened and it’s been that way ever since. Your GitHub is up there because it’s a blessed website, your personal site isn’t and will struggle mightily to rank even when you search exact, unusual phrases on it, if it’s like most of the rest of the Web on Google these days.

Get more traffic (make sure google analytics sees it, IDK but that probably matters because monopoly) and it might help.

Most of the other indices aren’t much better. Turns out fighting spam is expensive, easier to just do a combo of boosting really big sites and blessed spammers that use your ad network.

huijzer1 day ago
> Turns out fighting spam is expensive, easier to just do a combo of boosting really big sites and blessed spammers that use your ad network.

Plus, based on the results, it's not entirely clear that only the parts labeled as ads are ads. Especially around certain topics where money is involved, the Google first page is often showing companies that could profit from traffic.

bubblewand1 day ago
Well, right, a separate problem is that some notable amount of Google's revenue comes from fooling people into thinking that ads are "natural" search results. To include an extortion racket where you have to pay for ad placement for your own exact company and product names so competitors don't get ads-masquerading-as-results placed above you. Plus this is a super-helpful feature to scammers, like it's basically scam enablement trust-laundering as a service. If we had a functioning government and market guardrails the FTC would have been all over them for this many years ago, besides which they'd long ago have been broken up into several separate companies and denied a bunch of the acquisitions they've performed.
tracker11 day ago
I would suggest just using Github Pages for the "official" site, for similar reasons... unless you really need interactive parts that require client-server... in which case you can maybe split between pages and your own domain. Just a thought.
sonofhans1 day ago
This is how they get you, literally. “Too bad we’ve poisoned the public water source. How about if you buy water from us?”
LtWorf1 day ago
I moved my projects to Codeberg and the first result is still the locked GitHub project with the link to the new one.
vegasbrianc1 day ago
SEO is broken at the moment. With Google Overviews just killing organic SEO, it is becoming less and less relevant, unfortunately.
jasonvorhe20 hours ago
It's shocking how many developers and tinkerers still rely on Google when there's Brave, Kagi and others out there.
theanonymousone1 day ago
I saw this some time ago with Bing and OpenCode:

"If I search for "opencode GitHub" in Bing, a random fork is returned"

https://news.ycombinator.com/item?id=46573286

elevation1 day ago
This project was launched very quickly, and may have not had a large budget for extra domains.

But for entities with a bit more time, you can prevent this scenario by acquiring the .com/.net variant domains before launching.

shubhamintech1 day ago
lol This gets worse with AI search. If Google can't figure out canonical source from a GitHub repo linking directly to the official site, LLMs definitely can't. And once an AI overview bakes the fake site into its knowledge graph, you're not just losing Google rankings imo, you're losing the models too. Registering every TLD on day 1 is now just table stakes for any OSS project which still doesn't seem fair.
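For what it's worth, the "tell search engines and LLMs these are the same entity" markup could look something like this JSON-LD sketch (illustrative only; the schema.org type and property choices are assumptions on my part, and the URLs are the real site and repo from this thread):

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "NanoClaw",
  "url": "https://nanoclaw.dev",
  "sameAs": [
    "https://github.com/qwibitai/nanoclaw"
  ]
}
```

Embedded in the page head via a `<script type="application/ld+json">` tag, this is one of the few machine-readable ways a site can assert "the repo and I are the same project."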
roywiggins1 day ago
I'll be honest, I'd take this more seriously if this post didn't read like ChatGPT output. If you won't spend the effort to use your own words why should I stir myself to care?

Sorry, I'll put it in hand-crafted ChatGPTese:

## The Slop Problem

Every post sounds the same. No intelligence. No individuality. Just pure, clean LLM slop. Let's dive in.

- Every post has LLM tells. This is key.

- Posts get upvoted anyway. Nobody seems to notice or indeed care.

- People acclimate to the slop. This isn't just a coincidence. This is a real shift in standards. When people read enough of this, they begin to think it sounds normal.

## The Replying Dilemma

Should you engage with the content, when there is a real person involved? On the one hand, they put their name on it, and probably the details are drawn from their prompt, so it can be said to fairly represent what they wanted to say. So maybe ragging on their ChatGPT prose is being mean. On the other hand, if nobody ever mentions this, the acclimatization will only get worse as the rising tide of slop overwhelms any other style of writing.

## The "Snobbery is good actually" Option

Relentlessly bully people for their half-baked LLM copy. Make it your whole personality. Go insane.

## The "Giving Up" Solution

Learn to stop worrying and love the LLM.

mbac327681 day ago
A year ago I would have agreed but lately, when it comes to stuff linked off of HN, it's actually more likely to be clear and readable if it's AI written.
dragonwriter1 day ago
Is it more likely to be clear and reliable if it is AI-written, or are features associated (both directly and by correlation) with clear writing increasingly misperceived as “AI tells” because they are also favored in LLM training?
roywiggins1 day ago
I don't find the LLM written stuff very readable because after one too many "real"s or "The X Dilemma" my brain shuts off. It's not even voluntary, it just does that on its own.
bakugo1 day ago
The post is AI generated, the project is AI generated, the "real" website is AI generated, the "fake" website is AI generated.

It's slop all the way down.

roywiggins1 day ago
I'll be honest I really did have slightly higher hopes for computer-touchers when it comes to retaining cognitive authority over machines.

Instead it seems like there's a solid core of people who have always wanted to outsource their brains entirely to machines, and have finally got their wish.

I'm old enough to remember when we joked about normies who were dumb enough to let computers think for them.

inkysigma1 day ago
Just an FYI, but I don't know if being in the website field of GitHub really helps since there's a rel nofollow on the link.
bakugo1 day ago
> I don't want to be playing this game. I want to be writing code

I assume the "I" here refers to Claude, who seemingly wrote the entire project AND the linked post.

alexpham141 day ago
Oof, this is exactly the nightmare scenario for “repo-first” OSS.

The weird bit isn’t that a scraper site exists, it’s that Google can’t do the obvious graph join: query == project name, #1 result is the repo, repo declares Homepage = X, yet Google still boosts an imposter domain. That’s not “SEO”, that’s the ranking system refusing to treat maintainer-declared canonical as a strong signal. Early domain squatters get to “set the default” purely by being first, then they can flip the content later once trust is baked in.

People keep saying “tell users to bookmark the real URL” like that scales. Most people will click the second link and assume it’s official. If Google can’t solve this class of problem, their “AI answers” are going to be a bigger mess than blue links ever were.

ryandrake1 day ago
> I don't want to be playing this game. I want to be writing code, building community, pushing features, fixing bugs.

Then just write code, build features, and fix bugs. Nobody is forcing you to fix search engines' problems. If you're not making money off of traffic, then why worry so much about SEO? Just do your thing. If it really bothers you, put a little note on your GitHub warning people about the fake site, and get on with your life.

jrjeksjd8d1 day ago
You think somebody who wrote "nanoclaw" really doesn't care about getting industry famous and improving their career prospects?
shadowgovt1 day ago
That information comes from the GitHub commit history, not the existence / nonexistence / relative popularity of a website. If that's the goal, the imitating website is only helping the career prospects so long as it doesn't do anything shady on pass-through.
senko1 day ago
> This isn't an SEO problem. This is a Google problem.

Sorry, but this is a SEO problem. The fake site has probably been linked to by a number of high-SEO outlets. What you should do is contact them and tell them to fix the links (to point to your site), which they should be happy to do.

jermaustin11 day ago
I'm not sure how relevant this is anymore, but when I worked in SEO/Rep Management, when a website was dinged either by google or by hackers, we would usually spin up a new website as an umbrella website for the brand, fix their old site, and create a few smaller websites for the brand in specific niches (like if the brand was a bookseller, we'd have local websites, genre websites, etc.), link to the new websites by the umbrella site, then do a link analysis of the old site, and any news media with high authority, we'd have them update their links to point to the new umbrella website.

It was 100% a game of whack-a-mole. And while we were a reputation raiser, we were always combatting against reputation tarnishers. Car dealerships already have a bad reputation to begin with, but they hate eachother more than their customers hate them. They were our bread and butter. Same with tradespeople (plumbing, electrical, hvac, handy(wo)men).

Hizonner1 day ago
If SEO works, that's a Google problem.
thepasch1 day ago
> Sorry, but this is a SEO problem.

Google linking to a fake website directly underneath the real project's repository that has a real link to the real website isn't a SEO problem, lol.

beardyw1 day ago
If it doesn't work it's not SEO.
ZoomZoomZoom1 day ago
This is a google problem, but only secondary.

The crux of the matter is that there's nothing that protects an open project besides reputation, and nowadays in the digital space it can be cheaply farmed.

Laws could help, but they only work when you undertake purposeful actions to be covered by them, like register a trademark, and it's never cheap.

Imagine you're in a local band playing shows. It's 3 months old and you have no released records. A second band, tighter with venues, takes your name and starts performing under your moniker. You have no money to take that to court, and good luck making a case. You can't do anything besides screaming on the web or, I don't know, kicking a few butts. You change your name.

pocksuppet1 day ago
You can trademark your open source project, but only the biggest projects do.

You used to be able to buy yourname .com, .net, .org and that was a de facto trademark. Now that there are so many gTLDs, you can't.

renegat0x01 day ago
- I think I was upset when Google allowed fake ad for VLC to appear high in ranking

- I hate that Google returns content farms instead of product web pages

- I hate that Google provides a page of 10 useful links, later links are just pure garbage. I think that something in Google engine is profoundly broken

- I maintain my own search index, but it requires a lot of effort, and attention. I do insert links if I find them worthy. I think more people should have their personal search indexes. Mine is below. I am quite happy that problems like these do not affect me that much

https://github.com/rumca-js/Internet-Places-Database

michaelcampbell1 day ago
> I think that something in Google engine is profoundly broken

Optimizing for ad revenue is a good start.

tmaly1 day ago
Wasn't one of the original ideas of NFT was to essentially identify the original creator?
iamacyborg1 day ago
Google is absolutely idiotic sometimes.

We (as in the team that helped fork and migrate the PoE1 wiki) setup a new domain for the Path of Exile 2 wiki, which is being hosted by the folks at Grinding Gear Games and linked on the official website and in multiple places on the highly trafficked subreddit.

Despite this, Google has decided that the site is not relevant and shouldn't appear anywhere in search results, despite the wiki for the first game appearing everywhere.

boredhedgehog1 day ago
> The person running nanoclaw[.]net can put anything they want on that page tomorrow. A crypto scam. A phishing page. Malicious download links. They could fork the GitHub repo, inject malicious code, and link to it from the site that Google is telling thousands of people is legitimate.

A lot of handwringing about hypotheticals. The page is up there because it links the official repo. Changing that will quickly tank its search rank.

jagermo19 hours ago
kagi has you on first place, and the github project as #2.
rocketvole1 day ago
i think OrcaSlicer suffers from the same issue. Not really sure why some OSS projects struggle with this issue and others (Notepad++) don't.
TabTwo1 day ago
Wait till you learn about companies replacing the open source parts of their stack/products with something an AI coding agent produced. They do this to get rid of all the burden that comes with using open source, like risking getting sued if they don't ship the source code according to the license. This is why SBOMs are a hot topic right now. Also, coding agents are now good and cheap enough to do this.
barelysapient1 day ago
The more things change the more they stay the same.
keiferski1 day ago
Suddenly the pre-Google Yahoo model of curated links is starting to seem relevant again.

Curation in general is probably a skill that will become more and more in demand as the Internet fills up with AI slop.

roywiggins1 day ago
Unfortunately everyone here is terrible at curation, because this post is itself LLM output.
keybored1 day ago
Live by bots, die by bots.
Imustaskforhelp1 day ago
Duckduckgo actually shows nanoclaw.net as the first result and the github page as second.

Another point: DDG's AI feature actually references nanoclaw.net as a source.

Damn, I booted up Orion (Kagi) and even Kagi shows nanoclaw.net as the third result, after the GitHub page with qwibitai and another GitHub page with your (previous?) GitHub username, i.e. gavrielc, which when clicked also resolves to the same GitHub page.

There is an "Interesting Finds" section in Kagi which references the website, but it still shows the nanoclaw.net page earlier, and the nanoclaw.dev entry is so easy to miss that the first time around I didn't even notice it.

I expected better from DDG/Kagi, to be honest. I also tried Brave and it had the same issue. Brave even has its own independent index, and even that struggles with this.

Let's hope this can get patched quickly. Also a good reminder to prefer opening GitHub links over websites; I must admit that even as a tech-savvy person I could've fallen for the nanoclaw.net link, given it's second in pretty much all search engines.

cainetighe1 day ago
We can fix this quickly at DuckDuckGo, and we will for organics. I suspect part of the problem is I am seeing a TLS issue with the nanoclaw.dev site.
jimminyx1 day ago
Can you please share the details with me so I can fix? gavriel@qwibit.ai or https://x.com/Gavriel_Cohen
Imustaskforhelp1 day ago
Awesome! I am a big fan of DDG. I am happy I could help you guys. Another minor tidbit but please also remove DDG AI summary about nanoclaw referencing the .net if you do take some action about it.

I have also written a more detailed comparison of all the search providers I could find; perhaps it might be of interest to ya, but only Mojeek (and yandex.ru, via nanoclaw.dev/ru) were able to reference it earlier than .net.

I have been a happy user of DDG for a long time. I trust DDG significantly more than Google and I am happy that you guys read such feedback!

Have a nice day DDG team!

cainetighe1 day ago
SearchAssist is fixed, organics are taking a bit longer. Thanks again for the report, we should hopefully have the latter resolved by EoD.
cainetighe1 day ago
This should now be done on organics and search assist. Thanks again!
Imustaskforhelp1 day ago
I actually tried the query and can confirm. Searching nanoclaw has now removed nanoclaw.net from the results (although nanoclaw.dev hasn't appeared in the search results yet, but I suppose that can happen organically).

I am not the creator of nanoclaw or even related to it but I really appreciate how the DDG team took my feedback. Thanks to you as well!

> Thanks again!

Don't mind me if I use this comment (i.e. got thanked by the DuckDuckGo team for helping them) in something like a resume, haha. I'm half joking, but although small, I think it could reflect why I love privacy services, and with a right-minded employer it could give more talking points and maybe even be a discussion starter. So I might only be half joking when I say this haha!

I am really happy too that I can be of help. I love the work done at Duckduckgo. Truly one of the few companies that I root for honestly. I use you guys everyday* and I love y'all.

It's truly a pleasure from my side as well that I could help Duckduckgo team, you guys have been quick in acting on the feedback!

Most Privacy conscious user really love and appreciates Duckduckgo imo, myself included.

I hope you guys have a nice day! Take care!

absqueued1 day ago
So did Startpage for me! My guess is that with both domains being super new, it will resolve itself in a few weeks or a month.
dumbfounder1 day ago
DMCA?
pocksuppet1 day ago
No copyright violation was mentioned here, but it's not a crime to submit a DMCA notice anyway because you don't know the difference between copyright and trademark. If you do know the difference, then it becomes a crime to submit a DMCA notice about something you know a DMCA notice isn't for, so don't read this comment before you submit one.
Drupon1 day ago
Sorry Gavriel Cohen, but this Google search placement was promised to the other person thousands of years ago.
shevy-java1 day ago
I noticed this a few years ago. Google has been ruining its search engine, deliberately so. I could explain the things Google did here, but other websites and videos already explain it, including the why (though there is some speculation as to why).

These days I even find e. g. qwant sometimes having better results than google search. I see it as a positive thing though - I can soon stop using Google search. So one less Google product. One day I will be Google free. It will be a happy day. I really think Google must cease to exist.

(The only sad thing is how crap the other search engines are. So while Google search sucks nowadays, I consistently get even worse results with e. g. DuckDuckGo. And I think part of the reason is because the world wide web also sucks a LOT more compared to the old days. Google is also partially responsible for this by the way, which just reinforces the idea that Google must die.)

Imustaskforhelp1 day ago
Another comment here but here are all the search engines I looked at:

1. DDG 2. Kagi 3. Brave 4. Ecosia 5. Startpage 6. Marginalia 7. Mojeek 8. Yandex.ru

From 1-5, all referenced .net before .dev, and DDG referenced .net before GitHub. Marginalia didn't give me the .net, .dev, or GH link, but rather docker.com or some other tech articles.

Mojeek and Yandex.ru DID give me .dev links before .net at the time of writing.

I literally opened these two as a joke, especially Mojeek, not expecting too much. But I just know the names of lots of search engines, so I tried.

Mojeek and Yandex.ru have surprised me although I think yandex.ru might have referenced the .dev because of https://nanoclaw.dev/ru/ as it points to this.

Mojeek seems interesting now from this observation

I also wanted to try swisscows but looks like they have become 100% premium as I do remember being able to search for free but now a popup comes.

I also tried Baidu (the Chinese search engine) and it gave results in Chinese. Firefox Translate sort of stuttered and didn't work when I tried to translate, and I don't know Chinese, so I pasted the results into Claude: they link to neither .net nor .dev, just Chinese pages.

With all of these observations, I think we know one provider (Mojeek) that won. A lot of the engines on this list are actually not independent - the exceptions are Mojeek, Brave, and probably Yandex.ru.

So I guess the main takeaway could be that independent search engines can be interesting. They can still be hit or miss, but the more independent search engines the merrier: some might miss, but some will also hit.

My comment definitely reads like a reputation bonus for Mojeek. Well, anything for more independent search engines, imo. I looked at their about page and it seems the company was founded by a single person (Marc Smith). Fascinating stuff.

I know marginalia_nu is on HN, so maybe Marginalia and Mojeek could share some of their index. Anyway, this was a fun, exciting experiment. I hope the community tries out other search engines I may have missed and shares insights if a particular engine gives interesting results.

roywiggins1 day ago
I think you put more effort into this comment than the entire OP, which was clearly written by Claude.
Imustaskforhelp1 day ago
Now that does say something about the world, doesn't it?

I think this one just made me curious, so yeah, haha.

One thing I don't understand is why they would write the article with AI, though. They still prompted the AI - might as well give us the prompt, or just write something under 300 words. I mean, it's literally Twitter (I refuse to call it X).

Or make a two-minute video with a screenshare, just talking to the camera about it, like they might have done with Claude.

They also have a Discord. They could have asked a contributor to write the article for free, based on such a video or their notes, and credited them properly. Heck, I could have written the article for free for just a credit, at the point where I got this invested, haha.

I genuinely don't understand why you would prompt an article, of all things, out of an AI. I hope I never get pulled to this dark side, lol.

roywiggins1 day ago
My guesses in no particular order:

1) this style genuinely is preferred by lots of people on X/Twitter so you might as well lean into it

2) People who spend a lot of time with LLMs think this sort of writing is normal or even standard just through overexposure, a sort of pseudo social proof

2b) People who spend a lot of time with other people who use LLMs think this is how humans write (actual social proof)

3) People are insecure about their writing ability and find the non-judgmental non-human LLM editor soothing

4) people are lazy

5) people aren't lazy per se but they know writing has been so devalued that they aren't going to spend time on it that they don't need to

6) their first experience of writing was trying to hit word count requirements in grade school and that stuck

7) Visibly using LLMs is becoming a shibboleth for a social group on Twitter and LinkedIn. It's a marker that you are dogfooding the crappy AI tools you're developing and selling. Under this theory, being visibly LLM output is actually intentional: "look ma, no hands- all NanoClaw!"

Imustaskforhelp1 day ago
> 3) People are insecure about their writing ability and find the non-judgmental non-human LLM editor soothing

My writing style gets criticized. A lot. (I think it's by people with good hearts who just want to point out some flaws, and I appreciate that.) So I will admit I understand this point, because when someone questions your writing style, you do get insecure. Sometimes I even had thoughts of leaving Hacker News over it, because I have always taken pride in all of my comments - they are mine, after all :)

I don't think you can ever fix that. All AI does is move that critique from you to the LLM. But I'd say the biggest reason people do it is that it's hard to respond to such criticism (IMO).

Suppose someone says your writing is bad. For me it takes a huge mental effort not to get angry and type something back. It takes me time to reflect and respond peacefully.

I think I only manage that because I imagine being a business owner: how would an ideal business want to reply, and how would the reply reflect on the business? I have seen some businesses that are absolutely top notch, but whose responses in forums are sometimes very off-putting. I'd rather do the opposite.

And those particular comments are the ones I cherish the most. I once wrote a reply to criticism that felt so good to me personally that I seriously wondered how I wrote it. For a few days - I can't say for sure - I remember just looking back at that comment whenever I felt bad.

The one thing I agree with is that responding to such criticism can be very time consuming, though.

I try to respond to these comments nicely, but that doesn't mean I'm not insecure about my writing. I may project confidence when I write a nice comment, but I believe everyone is insecure about writing to some degree. And chances are most people are more likely to make a ruckus of the situation than handle it well.

So if I had to summarize all of this: I'd like it if people shared their concerns, but in a way that's agreeable. If you don't like someone's writing, point it out as feedback or cooperation the other person can accept.

If you do want to critique someone's writing, imagine yourself in their situation and anticipate what message would be the most beneficial or cooperative. Put yourself in their shoes, basically.

So I do agree with you on this point. Perhaps point 5) as well, because this comment took me 40 minutes to write and think through.

It's also about how time is invested: people would rather use their 40 minutes to build a project that can reach x stars on GitHub, which has some definite measure. This comment has no such measure right now, but I like to think that, given enough time, if I ever create anything, these comments could be meaningful as a record of what I was thinking.

Another part is that I can't stand obnoxious Reddit/Twitter. Those algorithms feel flawed to me and I'd rather not contribute to that machine. The funny thing is that the above line of thinking might pay off more on those platforms than here, given that they are mainstream, but yeah.

More than anything, I write because I find these topics interesting to type about - or rather, I write for myself: I want to read these comments in the future to see what I was really thinking about things. Kind of like a journal. Twitter/Reddit are less suited to such long comments than HN, and tbh HN has its limits too, but I think the community here is much more receptive to long comments.

(Imagine if I wrote you such a long post on a random subreddit or on Twitter - those platforms are less likely to capture nuance, imo.)

Edit: were these the best 40 minutes I have spent? Probably not - that was playing Skribbl with my friend yesterday. But I did get a permanent comment on a particular topic that I can reference in any discussion, and it was interesting to think about. If a person doesn't care about that, though, or their community doesn't push back on AI writing - and those were your points - then yeah, I agree with you more and more the longer I think about it, honestly.

To some people it could be an interesting tradeoff to spend less time thinking and writing, but that doesn't feel right to me, especially for something you're passionate about.

DeathArrow1 day ago
>We trust Google to surface reliable information about elections. Vaccines. Medical conditions. Financial decisions. And they can't get this right?

Actually I don't trust Google and I don't expect it to surface reliable information. I expect it to surface information and I will dig through it and judge for myself whether it is reliable or not.

jongjong1 day ago
Google should just make their algorithms open source so at least scammers don't have an edge over legitimate projects - which is the current reality.
MagicMoonlight1 day ago
A guy that stole someone else’s idea by making a shinier website getting mad that someone stole his idea by making a shinier website. Such is life.
AlexeyBelov23 hours ago
Grifter when getting outgrifted: :o
devld20 hours ago
I'm sorry to hear that. But this is another reason why .com is king.
imp0cat1 day ago
It's simple really, .net > .dev.
ChrisArchitect1 day ago
Two weeks? Hardly enough time for the correct URL to take over - a correct URL with no history or presence that, as far as the engine is concerned, came out of nowhere. It will most likely happen, thanks to the links from the project etc., but it might take a while since the other URL is established. "Losing the battle" now, perhaps, but probably not for long.
newswasboring1 day ago
I fell for this yesterday, but for zeroclaw, not nanoclaw. I found this website[1] through Brave Search, I think. I wasn't paying much attention (I was under the influence); it points to the wrong repo[2] and its instructions install from that. I didn't like zeroclaw anyway, so I went to uninstall it and only then realized I was on a forked repo.

[1] https://zeroclaw.net/ [2] https://github.com/openagen/zeroclaw
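One way to catch this earlier: before trusting a clone's install instructions, check which remote the clone actually points at and compare it against the canonical repo URL from the project's official docs. A rough sketch - the function name and the expected-URL convention are my own, not from either project:

```shell
# check_origin DIR EXPECTED_URL
# Prints a warning if the clone's origin remote differs from the
# URL you expected (take the expected URL from the official docs).
check_origin() {
  actual=$(git -C "$1" config --get remote.origin.url)
  if [ "$actual" != "$2" ]; then
    echo "warning: $1 points at $actual, not $2"
  fi
}
```

For example, `check_origin zeroclaw <canonical-repo-url>` would have flagged a clone of the openagen fork mentioned above before anything got installed.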

csomar1 day ago
It’s worse. I wrote about this a couple of weeks ago [1]. With AI responses and Google pulling results from different sources, you could potentially hijack other brands with your own fake content (e.g., a phone number).

1: https://codeinput.com/blog/google-seo

yieldcrv1 day ago
Gavriel is freaking out over nothing while making rookie mistakes, pretending he's not in an SEO war.

It's literally not his problem that some people click a scam link; he still has 18,000 GitHub stars. It's just a bifurcated audience of undiscerning people.

He's overly worried about a perfect, unanimous impression when he shouldn't be.

Now he's wasting his money on SEO tweaks and domain names while saying he only wants to code. Then focus on coding! Not buying obscure TLDs and vibecoding sitemaps while wondering what he did wrong.

yeesh, some people can't handle a little fame

OsrsNeedsf2P1 day ago
What a terrible take. OP spent a lot of time making his project, and now someone else is impersonating them and trashing their reputation with ads. Of course they have reason to be upset.
yieldcrv1 day ago
being upset is a feeling, panic buying domains is an action