I don't actually mind AI-aided development; a tool is a tool and should be used if you find it useful. But I think the vibe-coded Show HN projects are overall pretty boring. They generally don't have much work put into them, and as a result the author (pilot?) usually hasn't thought much about the problem space, so there isn't really much of a discussion to be had.
The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
I feel like this is what AI has done to the programming discussion. It draws in boring people with boring projects who don't have anything interesting to say about programming.
One of the great benefits of AI tools, is they allow anyone to build stuff... even if they have no ideas or knowledge.
One of the great drawbacks of AI tools, is they allow anyone to build stuff... even if they have no ideas or knowledge.
It used to be that Show HN was a filter: in order to show stuff, you had to have done work. And if you did the work, you probably thought about the problem; at the very least, the problem was real enough to make solving it worthwhile.
Now there's no such filter function, so projects get built whether or not they're good ideas, by people who don't know very much.
People who got "enabled" by AI to produce stuff just need to learn to keep their "target audience of one" projects to themselves. Right now it feels like those new parents who show every person they meet the latest photos/videos of their baby, thinking everybody will find them super cute and interesting.
Yeah, I think it's sort of an etiquette thing we haven't arrived at yet.
It's a bit parallel to that thing we had in 2023 where dinguses went into every thread and proudly announced what ChatGPT had to say about the subject. Consensus eventually became that this was annoying and unhelpful.
The other element here is that the vibecoder hasn't done the interesting thing, they've pulled other people's interesting things.
Let's see, how to put this less inflammatorily...
(Just did this.) I'm sitting here in a hotel and wondered if I could do some fancy video processing on my laptop's camera feed to turn it into a wildlife cam and capture the birds that keep flying by.
I ask Codex to whip something up. I iterate a few times; I ask why processing is slow, and it suggests a DNN. I tell it to go ahead and add GPU support while it's at it.
In a short period of time, I have an app that is processing video, doing all of the detection, applying the correct models, and works.
It's impressive _to me_ but it's not lost on me that all of the hard parts were done by someone else. Someone wrote the video library, someone wrote the easy python video parsers, someone trained and supplied the neural networks, someone did the hard work of writing a CUDA/GPU support library that 'just works'.
I get to slap this all together.
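For the curious, the cheap part of such a pipeline is easy to sketch. This is not the actual Codex-generated app; it's a minimal, hand-written frame-differencing gate (all names here are my own invention) of the kind a wildlife cam might use to decide which frames are worth handing to a slower neural-network detector:

```python
def motion_fraction(prev, curr, thresh=25):
    """Fraction of pixels whose brightness changed by more than `thresh`.
    Frames are lists of rows of 0-255 grayscale values."""
    changed = total = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(c - p) > thresh:
                changed += 1
    return changed / total

def frame_has_motion(prev, curr, min_fraction=0.01):
    """Cheap gate: only frames that trip this go on to the slow detector."""
    return motion_fraction(prev, curr) >= min_fraction
```

Everything heavy (decoding the camera feed, the detection model, the GPU support) is exactly the part the libraries and pretrained networks provide.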
In some ways, that's the essence of software engineering. Building on the infinite layers of abstractions built by others.
In other ways, it doesn't feel earned. It feels hollow in some way and demoing or sharing that code feels equally hollow. "Look at this thing that I had AI copy-paste together!"
To me, part of what makes it feel hollow is that if we were to ask you about any of those layers, and why they were chosen or how they worked, you probably would stumble through an answer.
And for something that is, as you said, impressive to you, that's fine! But the spirit of Show HN is that there was some friction involved, some learning process that you went through, that resulted in the GitHub link at the top.
I knew I could do better, so I made a version that is about 15 KB and solves a fundamental issue with WebGL context limits while being significantly faster.
AI helped with a lot of the code, especially around the compute shaders. However, I had the idea for how to solve the context limits. I also pushed past several perf bottlenecks that came from my fundamental lack of WebGPU knowledge, and in the process deepened my understanding of it. Pushing the bundle size down also stretched my understanding of JS build ecosystems and why web workers still aren't more common (special bundler settings for workers break often).
By the way, my version is on npm/GitHub as chartai. You tell me if that is AI slop. I don't think it is, but I could be wrong.
I have yet to see any of these that wouldn’t have been far better off self-hosting an existing open source app. This habit of having an LLM either clone (or even worse, cobble together a vague facsimile) of existing software and claiming it as your own is just sort of sad.
I actually came to this realization recently. I'm part of a modding community for a game, and we are seeing an influx of vibe coded mods. The one distinguishing feature of these is that they are entirely parasitic. They only take, they do not contribute.
In the past, new modders would often contribute to existing mods to get their feet wet and quite often they'd turn into maintainers when the original authors burnt out.
But vibe coders never do this. They basically unilaterally just take existing mods' source code, feed this into their LLM of choice and generate a derivative work. They don't contribute back anything, because they don't even try to understand what they are doing.
Their ideas might be novel, but they don't contribute in any way to the common good in terms of capabilities or infrastructure. It's becoming nigh impossible to police this, and I fear the endgame is a sea of AI-generated slop which will inevitably implode once the truly innovative stuff dies and the people who actually do the work stop doing so.
That's the essence of the corporations behind these commercial products as well. Leech off the work of others, then sell a product that regurgitates said work without attribution or any contribution back.
To be fair, one probably needs at least one idea in order to build stuff even with AI. A prompt like "write a cool computer program and tell me what it does" seems unlikely to produce something that even the author of that prompt would deem worthy of showing to others.
It often is. The concept of "gatekeeping" becoming well known and something people blindly rail against was a huge mistake. Not everything is for everyone, and "gatekeeping" is usually just maintaining standards.
Ideally the standard would just be someone's genuine interest in a project or a hobby. In the past, taking the effort to write code often was sufficient proof of that.
AI agent coding has introduced into software the kind of dynamic that brands brought to social media.
I'm not sure what distinction you're trying to make, but it seems like you might be distinguishing between keeping out substandard work versus keeping out the submitters.
In which case, I kinda disagree. Substandard work is typically submitted by people who don't "get it" and thus either don't understand the standard for work or don't care about meeting it. Either way, any future submission is highly likely to fail the standard again and waste evaluation time.
Of course, there's typically a long tail of people who submit one work to a collection and don't even bother to stick around long enough to see how the community reacts to that work. But those people, almost definitionally, aren't going to complain about being "gatekept" when the work is rejected.
Agreed, and we're gonna see this everywhere AI can touch. Our filter functions for books, video, music, etc. are all now broken. And worst of all, that breaking coincides with an avalanche of slop, making detection even harder.
There is this real disconnect between what the visible level of effort implies you've done, and what you actually have to do.
It's going to be interesting to see how our filters get rewired for this visually-impressive-but-otherwise-slop abundance.
My prediction is that reputation will be increasingly important, certain credentials and institutions will have tremendous value and influence. Normal people will have a hard time breaking out of their community, and success will look like acquiring the right credentials to appear in the trusted places.
That's been the trajectory for at least the last 100 years, an endless procession of certifications. Just like you can no longer get a decent-paying blue collar job without at least an HS diploma or equivalent, the days of working in tech without a university education are drying up and have been doing so for a while now.
I think the recent past was a respite in very specific contexts like software maybe. Others, like most blue collar jobs, were always more of an apprentice system. And, still others, like many branches of engineering, largely required degrees.
I have a sci-fi series I've followed religiously for probably 10 years now. It's called the 'Undying Mercenaries' series. The author is prolific, like he's been putting out a book in this series every 6 months since 2011. I'm sure he has used ghost writers in the past, but the books were always generally a good time.
Last year, though, I purchased the next book in the series, and I am 99% sure it was AI-generated. None of the characters behaved consistently, and there were a ton of random lewd scenes involving characters from books past. There were paragraphs and paragraphs of purple prose describing the scene but not actually saying anything. It was just so unlike every other book in the series; it was like someone pasted all the previous books into an LLM and pushed the go button.
I was so shocked and disappointed that I had paid good money for AI slop that I've stopped following the author entirely. It was a real eye-opener for me. I used to enjoy just taking a chance on a new book, because the fact that it made it through publishing at least implied some minimum quality standard, but now I'm really picky about what books I pick up because the quality floor is so much lower than in the past.
People will build AI 'quality detectors' to sort and filter the slop.
The problem, of course, is that it won't work very well and will drown out all the human channels that are trying to curate various genres. I'm not optimistic; I expect everything to turn into a grey sludge of similar, mediocre material.
> so projects are built whether or not they're good ideas
Let’s be honest, this was always the case. The difference now is that nobody cares about the implementation, as all side projects are assumed to be vibecoded.
So when execution is becoming easier, it’s the ideas that matter more…
This is something that I was thinking about today. We're at the point where anyone can vibe code a product that "appears" to work. There's going to be a glut of garbage.
It used to be that getting to that point required a lot of effort. So, in producing something large, there were quality indicators, and you could calibrate your expectations based on this.
Nowadays, you can get the large thing done - meanwhile the internal codebase is a mess and held together with AI duct-tape.
In the past, this codebase wouldn't scale, the devs would quit, the project would stall, and most of the time the things written poorly would die off. Not every time, but most of the time -- or at least until someone wrote the thing better/faster/more efficiently.
How can you differentiate between 10 identical products, 9 of which were vibe-coded and 1 of which wasn't? The one which wasn't might actually recover your backups when it fails. The other 9? Whoops, never tested that code path. Customers won't know until the edge cases happen.
It's the app store effect, but magnified and applied to everything. Search for a product, find 200 near-identical apps, all somehow "official", 90% of which are scams or low-effort trash.
Sure, there's many examples (I have a few personal ones as well) where I'm just building small tools and helpers for myself which I just wouldn't have done before because it would take me half a day. Or non-technical people at work that now just build some macros and scripts for Google Sheets that they would've never done before to automate little things.
"The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time."
— Tom Cargill, Bell Labs
Some day I’m going to get a crystal ball for statistics. Getting bored with a project was always a thing— after the first push, I don’t encounter like 80% of my coding side projects until I’m cleaning— but I’ll bet the abandonment rate for side projects has skyrocketed. I think a lot of what we’re seeing are projects that were easy enough to reach MVP before encountering the final 90% of coding time, which AI is a lot less useful for.
> I’ll bet the abandonment rate for side projects has skyrocketed
My experience is the opposite. It's so much easier to have an LLM grind through the last-mile annoyances (e.g. installing and debugging compilation bullshit on a specific Raspberry Pi with unmaintained third-party library versions).
I can focus on the parts I love, including writing them all by hand, and push the “this isn’t fun, I’d rather do something else” bits to a minion.
You both have very good points here, but once I get finished with both of the 90% programming times, and everything seems to finally work with no more bugs (and it's true), then for my heavy industry work I look forward to spending 10X as much effort testing compared to coding.
I think that's a fear I have about AI for programming (and I use these tools). So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces. Though we can build commercial products quickly and easily, no one really writes code for difficult problem spaces, so no one builds up expertise in important subdomains for a generation. Then what will AI be trained on in, say, 20-30 years? Old code? Its own AI-developed code from vibe-coded projects? How will AI be able to do new things well if it was trained on what people wrote previously and no one writes novel code themselves? It seems to me that AI is pretty dependent on having a corpus of human-made code, so, for example, I am not sure it will be able to learn to write highly optimized code for some future ISA.
>So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces.
I don't think we need to wait a generation, either. This was probably part of their personality already, but a group of developers at my job seems to have just given up on thinking hard / thinking through difficult problems. It's insane to witness.
> Then what will AI be trained on in let's say 20-30 years? Old code? It's own AI developed code for vibe coded projects?
I’ve seen variations of this question since the first few weeks/months after the release of ChatGPT, and I haven’t seen an answer to it from leading figures in the AI coding space. What’s the general answer or point of view on this?
Is it hard to imagine that things will just stay the same for 20-30 years or longer? Here is an example of the B programming language from 1969, over 50 years ago:
    printn(n, b) {
        extrn putchar;
        auto a;

        if (a = n/b)        /* assignment, not test for equality */
            printn(a, b);   /* recursive */
        putchar(n%b + '0');
    }
You'd think we'd have a much better way of expressing the details of software, 50 years later? But here we are, still using ASCII text, separated by curly braces.
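To make the point concrete, here is my own transliteration of that routine into present-day Python (not code from the thread); half a century later, the shape of it is essentially unchanged:

```python
import sys

def printn(n, b):
    """Print non-negative n in base b, digit by digit (a port of the 1969 B routine)."""
    a = n // b
    if a:  # mirrors B's `if (a = n/b)`, where assignment doubles as the test
        printn(a, b)                     # recursive
    sys.stdout.write(chr(n % b + ord('0')))
```

Recursion, integer division, the `'0'` offset trick: all intact, still ASCII text separated by delimiters.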
I observed this myself at least 10 years ago. I was reflecting on what I had done in the approximately 30 years I had been programming at that time, and how little had fundamentally changed. We still programmed by sitting at a keyboard, entering text on a screen, running a compiler, etc. Some languages and methodologies had their moments in the sun and then faded, the internet made sharing code and accessing documentation and examples much easier, but the experience of programming had changed little since the 1980s.
Exactly. Prose, code, visual arts, etc. AI material drowns out human material. AI tools disincentivize understanding and skill development and novelty ("outside the training distribution"). Intellectual property is no longer protected: what you publish becomes de facto anonymous common property.
Long-term, this will do enormous damage to society and our species.
The solution is that you declare war and attack the enemy with a stream of slop training data ("poison"). You inject vast quantities of high-quality poison (inexpensive to generate but expensive to detect) into the intakes of the enemy engine.
We create poisoned git repos on every hosting platform. Every day we feed two gigabytes of poison to web crawlers via dozens of proxy sites. Our goal is a terabyte per day by the end of this year. We fill the corners of social media with poison snippets.
This will happen regardless. LLMs are already ingesting their own output. At the point where AI output becomes the majority of internet content, interesting things will happen. Presumably the AI companies will put lots of effort into finding good training data, and ironically that will probably be easier for code than anything else, since there are compilers and linters to lean on.
I've thought about this and wondered if this current moment is actually peak AI usefulness: the signal-to-noise ratio is high, but once training data becomes polluted with its own slop, things could start getting worse, not better.
I see a lot of projects repeated: screen capture tool, LLM wrapper, blog/newsletter, marketing tool for reddit/twitter, manage social media accounts. These things have been around for a while so it is really easy for an LLM to spit them out for someone that does not know how to code.
It's because of the common belief that you should build copies of whatever SaaS makes decent money. What they don't mention is that people need a very good reason to go for your bare-bones MVP instead of a well-established solution.
Agreed. I'm over here working on Quake 2 mods and reverse engineering Offworld Trading Company so I can finish an open source server for it using AI.
Thing is I worked manually on both of these a lot before I even touched Claude on them so I basically was able to hit my wishlist items that I don't have time to deal with these days but have the logic figured out already.
> author (pilot?) hasn't generally thought too much about the problem space
I’ve stopped saying that “AI is just a tool” to justify/defend its use precisely because of this loss of thought you highlight. I now believe the appropriate analogy is “AI is delegation”.
So talking to a vibe coder who used AI is like talking to a high-level manager, rather than to the engineer, as you would with human-written code.
My favorite part about people promoting (and probably vote-stuffing) their closed-source, non-free apps that clone other apps is when people share the superior free alternatives in the comments.
As someone who posts blogs and projects out of my own enjoyment, with no AI for code generation and a hand-edited blog, I still have no idea how to signal to people that I actually know what I'm talking about. Every step of the process could've been done by an LLM, albeit worse, so I don't have a way of marking my projects as something different. I'm considering putting a "No LLMs used in this project" tag at the start, but that feels a little tacky.
Communicating that you know what you are talking about and that you're different is a lot of work. I think being visibly "anti-AI" makes you look as much of an NPC as someone who "vibe coded XYZ." It takes care, consistency and most of all showing people something they've never seen before. It also helps to get in the habit of doing in person demos, if you want to win hackathons it really helps to be good at (1) giving demos on stage and (2) have a sense of what it takes to make something that is good to demo.
I have two projects right now on the threshold of "Show HN" that I used AI for but could have completed without AI. I'm never going to say "I did this with AI". For instance there is this HR monitor demo
which needs tuning up for mobile (so I can do an in-person demo for people who work on HRV), but most of all it needs to be able to run with pre-recorded data, so that people who don't have a BTLE HR monitor can see how cool it is.
Another thing I am tuning up for "never saw anything like this" impact is a system of tokens that I give people when I go out as-a-foxographer
I am used to marketing funnels having 5% effectiveness and it blows my mind that at least 75% of the tokens I give out get scanned and that is with the old conventional cards that have the same back side. The number + suit tokens are particularly good as a "self-working demo" because it is easy to talk about them, when somebody flags me down because they noticed my hood I can show them a few cards that are all different and let them choose one or say "Look, you got the 9 of Bees!"
I had a similar thought way back when. It comes down to what is important to the person reviewing it, be it the style, the form, or just whether it works for their use case. In the case of organic food, I did not even know I was living a healthy lifestyle until I came to the US. But now organic is just another label, played by marketing people like anything else.
As I may have noted before, humans are the problem.
I added the following at the top of the blog post that I wrote yesterday: "All words in this blog post were written by a human being."
I don't particularly care if people question that, but the source repo is on GitHub: they can see all the edits that were made along the way. Most LLMs wouldn't deliberately add a million spelling or grammar mistakes to fake a human being... yet.
As for knowing what I'm talking about: many of my blog posts are about stuff that I just learned, so I have plenty of disclaimers that the reader should take everything with a grain of salt. :-) That said, I put a ridiculous amount of time into these things to make sure they're correct. Knowing that your stuff will be out there for others to criticize is a great motivator to do your homework.
You’re not actually at risk of being labeled an LLM user until someone comes along and makes that claim about your work. So my advice is not to fight a preemptive battle over your tone, and to adjust when/if that day comes.
Side note: I’d think installing Anubis over your work would go a long way to signaling that but ymmv.
> I still have no idea how to signal to people that I actually know what I’m talking about.
Presumably, if this is true, it should be obvious from the quality of your product. If it isn't, then maybe you need to rethink the value of your artisanal hand-written code.
I think the problem is that LLMs are good at making plausible-looking text, and discerning whether a random post is good or bad requires effort. It's really bad when the signal-to-noise ratio is low, due to slop being easier to make.
I predict that now that coding has become a commodity, smart young people drawn to technical problem-solving will start choosing other career paths over programming. I just don't know which ones, since AI seems to be commoditizing every form of engineering work.
When I was growing up (millennial) it seemed to me that the default for smart young people drawn to technical problem solving was something like aerospace, software or hardware was more or less a fun hobby, like it was for Steve Wozniak. Nobody cared whether or which of these were a commodity, which is what happens when you actually enjoy something.
These days I do see a lot of people choosing software for the money. Notably, many of them are bootcamp graduates and arguably made a pivot later in life, as opposed to other careers (such as medicine) which get chosen early. Nothing wrong with that (for many it has a good ROI), but I don’t think this changed anything about people with technical hobbies.
When you’re young, you tend not to choose the path the rest of your life will take based on income. What your parents want for you is a different matter…
I have a project that I'm hoping to launch on show HN in the next few days which was built entirely with the help of AI agents.
It's taken me about a month; I'm currently at ~500 commits. I've been obsessed with this problem for ~6 weeks and have made an enormous amount of progress, but admittedly I'm not an expert in the domain.
Being intentionally vague, because I don't want to tip my hand until it's ready. The problem is related to an existing open source tool in a particular scientific niche which flatly does not work on an important modern platform. My project, an open source repo, brings this important legacy tool to this modern platform and also offers a highly engaging visual demo that is of general interest, even to a layperson not interested in programming or this particular scientific niche.
I genuinely believe I have something valuable to offer to this niche scientific community, but also as a general interest and curiosity to HN for the programming aspects (I put a lot of thought into the architecture) as well as the visual aspects (I put a lot of thought into the design and aesthetics).
Do you have any advice on how to present this work in a compelling way to people who understandably feel as burned out on AI slop as you do?
I think there are a few distinct use cases for Show HN that lead to conflicting visions:
* some people want to show off a fun project/toy/product that they built because it's a business they're trying to start and they want to get marketing
* some people want to show off a fun project/toy/product that they built because it involves some cool tech under the hood and they want to talk shop
* some people want to show off a fun project/toy/product that they built because it's a fun thing and they just want some people to have fun
I had a light bulb come on reading your comment. Yes! When I read Show HN posts that are clearly missing key information, it makes me care less because the author didn’t care to learn the space they’d like to play in.
This, but unironically. Every submission should have an "AI?" checkbox, to indicate if the content submitted is about AI or made by AI, because I'm just absolutely fed up with 2/3 of HN front page being slop or meta-slop.
I'm not an anti-AI luddite, but for gods sake talk about (ie. submit) something else!
Having too many subs could get out of hand, but sometimes you end up with so much paperwork generated so fast that it needs its own dedicated drawer in your filing cabinet ;)
Sorry about that, didn't mean to hurt anybody's feelings :(
It's still early and easy to underestimate the number of visitors who would absolutely love to have the main page more covered in absolute pure vibe than it is recently.
I would like to hear opinions as to why the non-human touch is preferred, that could add something that not many are putting into words.
Hopefully it's not a case of the lights being on but nobody's home :(
One thing about vibe coding is that unless you are an expert in what you have vibe coded, you have no idea if it actually works properly, and it probably doesn't.
Worse yet, if you're not an expert (with autodidacts potentially qualifying), your ideas won't be original anyway.
You'll be inventing a lot of novel circular apparatus with a pivot and circumferential rubber absorbers for transportation, and it'll take people serious effort to convince you it's just a wheel.
I shared a well-thought-out vibe-coded app on Show HN last month. It took a few hours to get a POC and two weeks to fully develop a product meeting my requirements. Nobody cared.
The problem is we're all stuck in a cut throat game of musical chairs in an eroding industry, with almost all organic platforms locked down and billion dollar orgs trying their damnedest to funnel you into pay2play.
I mean it's a real problem, but it's also a solved problem, and also not a problem that comes up a lot unless you're doing the sort of engineering where you're using a CAD tool already.
I don't doubt it's useful, and seems pretty well crafted what little I tried it, but it doesn't really invite much discussion.
> I think the vibe coded show HN projects are overall pretty boring.
Agreed. r/ProgrammingLanguages had to deal with this recently in the same way HN has to; people were submitting these obviously vibecoded languages there that barely did anything, just a deluge of "make me a language that does X", where it doesn't actually do X or embody any of the properties that were prompted.
One thing that was pointed out was "More often than not the author also doesn't engage with the community at all, instead they just share their project across a wide range of subreddits." I think HN is another destination for those kinds of AI slop projects -- I'm sure you could find every banned language posted on that forum posted here.
Their solution was to write a new rule and just ban them outright. Things have been going much better since.
Yes, we need to do something about this and tomhow and I are talking about it - it's not clear yet what.
Raising the quality bar would likely cut down on quantity as a side effect, and that would be a nice solution. One idea that a user proposed is a review queue where experienced HN users would help new Show HN submitters craft their posts to be more interesting and fit HN's conventions more.
I recommend making "What are you working on [this weekend/weekly]" official like whoishiring and encouraging pre-Show HN comments there. (https://news.ycombinator.com/item?id=47041973#47043174 "what's the right venue for sharing [LLM-built side projects]")
Also: require disclosure of the use of AI in repos, and especially when responding to HN feedback in comments (or perhaps specifically discourage its use there).
I'll take this opportunity to strongly encourage sharing prompts (the newest tier of software source code) as the logical progression of OSS adding additional value to Show HN.
Combining the two approaches might work. A "pre-moderation queue" for submissions that are solid enough to pass the "Show" bar, and then the monthly "what are you working on" threads as a more free-form creative outlet.
And yes, disclosing the use of AI should be par for the course.
My new favorite thing to do is grab the "What are you working on" threads and have an LLM group them categorically, with one-line descriptions of each app.
Interesting take on the sharing of prompts. I don’t think this is a bad idea. How would this work though given different prompts occur in different context windows?
Internally, we have a standard that any AI written code simply includes a cut-and-paste of the chat prompt (if that were used), and/or the .md files (if those were used).
I would like to see it as extended comments in each git commit. There have been a few examples of some doing so manually but it needs to be supported by the tooling... with all the half-arsed "standards" like MCP etc. I'm surprised there isn't something already.
Sharing prompts, not sure it works if your project required hundreds of prompts? It’s all in history though (.jsonl) so I’m sure the AI can condense it somehow.
What I see is new users who are trying to share something without having yet understood HN. I get the impression that they think of "Show HN" as no different than "Show and Tell", and that putting the label on their post is communicating the message of "Here is something I want people to see", instead of "Here is something you can try out".
So while I understand that new features on HN are few and far between, a quick validation of "Show HN" posts that says, "I see you are trying to post a Show HN..." with some concise explanation of the guidelines might help. I want to believe that most new users mean well, they just need better explanations.
> What I see is new users who are trying to share something without having yet understood HN.
From their perspective, HN is another place to post and get views on their project, part of a check list for their "launch" or whatever, not everything comes from within the ecosystem.
Some post their projects and then never reply to any of the comments, while for me (and many others, I bet) half the reason for posting a Show HN is that I'm looking to participate in discussions about my thing and understand different perspectives on it too.
> I want to believe that most new users mean well, they just need better explanations.
Yeah, so far the only thing I know of is the "Please read the Show HN rules and tips before posting" blurb on the /show list, and the separate pages. Maybe some interstitial or similar if the title prefix-matches with "Show HN" could display the rules, guidelines and "netiquette" more prominently and get more people to be aware of it.
Not sure if it would work for HN / how it could be adapted to HN, but something I noticed on open-source projects is that once they hit a hurdle, submitters of low-quality AI-written PRs don't try to solve it and go elsewhere.
For example, in one project, PRs have to be submitted to the "next" branch and not the default branch. This is written in the CONTRIBUTING.md file, which is linked in the PR template, with the mention that PRs that don't respect that will be closed. Most if not all submitters of low-quality PRs don't do anything once their initial PR is closed.
Pretty bummed about that as I just submitted a Show HN I'm pretty happy about (it solves an annoying problem I had for years, which I know many people have) and I was looking forward to talking about it (https://news.ycombinator.com/item?id=47050872)
Back when I ran a WoW guild, the first sentence in our recruitment post emphasized the importance of reading the whole post (because the way to access the application form was to click the only smiley in the post, and this detail was mentioned in the last paragraph).
Most people did not read the post, which was immediately evident from how they posted their application by copy-pasting and editing an application posted by someone else before them.
> For example, in one project, PRs have to be submitted to the "next" branch and not the default branch. This is written in the CONTRIBUTING.md file, which is linked in the PR template, with the mention that PRs that don't respect that will be closed. Most if not all submitters of low-quality PRs don't do anything once their initial PR is closed.
Few things in life are as reliable and trustworthy as the laziness of others.
How about inverting the issue: highlight posts with an opt-in label, e.g.
Show HN [NOAI]:
Since it's too controversial to ban LLM posts, and it would be too easy for submitters to omit an [LLM] label... Having an opt-in [NOAI] label allows people to highlight their posts, and LLM posts would be easy to flag to disincentivise polluting the label.
This wouldn't necessarily need to be a technical change, just an informal agreement that posts containing LLM or vibe-coded content are not allowed to lie by using the tag, or will be flagged... Then again it could also be used to elevate their rank above other Show HN content to give us humanoids some edge if deemed necessary, or a segregated [NOAI] page.
[edit]
The label might need more thought, although "NOAI" is short and intelligible, it might be seen as a bit ironic to have to add a tag containing "AI" into your title. [HUMAN]?
I'm 90% sure this will end with endless squabbles over whether the label is correct/incorrect, rather than actual conversations about the project the person is showing. It already happens without the labels; feels like it'd increase the frequency of that even more if this label gets enforced.
Is the problem that the app was written with AI assistance or that it's low-effort/bad? I don't care if you used Claude to fix a bug or something if you have a cool app, but I do care if you vibe coded something I could've vibe coded in an hour. That's boring.
Feels like effort needs to be the barrier (which unfortunately needs human review), not "AI or not". In lieu of that, 100 karma or account minimum age to post something as Show HN might be a dumb way to do it (to give you enough time to have read other people's so you understand the vibe).
In my opinion, for open-source projects, scoring the project's AI sloppiness based on the timeline of commits would be a good indicator. If it's completed within a few days, it should require more thorough human review. On the other hand, if the project has been active for a while and received contributions spread throughout that timeline, I think that would indicate accumulated effort (human and/or AI) and higher quality.
Show HN has never been restricted to open source projects and it would be weird to make the criteria more restrictive for open source than closed source work.
I thought so too, until I looked for 1-point Show HN posts with a repo with a long commit history. Some of these are really cool (see my article), but others were not compelling at all, at least to me.
Eh IMO any metric like this can be gamed. My project that reached hn front page was coded in a short time (and yes some ai was used), but otoh I think it was something that showed hey you can do this really interesting thing (in my case vlm based indoor location).
Also it's not uncommon for weekend projects to be done in a short span with just a "first commit" message dump, even pre-AI.
Yes, any metric can be gamed. But I believe measuring the entropy of a repository, comparing the state of the code-base over time, can be done deterministically, which would make it harder to game.
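To make the timeline idea concrete: one very crude signal is how concentrated the commit history is in time. A toy sketch (this is an illustrative heuristic, not a real "sloppiness score", and as noted elsewhere in the thread, commit dates can be forged):

```python
from collections import Counter
from datetime import date

def burstiness(commit_dates: list[date]) -> float:
    """Fraction of commits that landed on the single busiest day.

    A value near 1.0 means the whole history was dumped at once
    (a possible slop signal); lower values suggest effort spread
    over time.
    """
    per_day = Counter(commit_dates)
    return max(per_day.values()) / len(commit_dates)

# A repo with 8 of 10 commits on one day looks "bursty":
dates = [date(2026, 2, 1)] * 8 + [date(2026, 1, 5), date(2026, 1, 20)]
print(burstiness(dates))  # 0.8
```

A real score would need more than dates (diff sizes, revert patterns, review activity), but even this shows why a weekend project and a years-long effort look different on this axis.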
So either we completely avoid automation and create a community council to decide what deserves to be shown to the rest of the community, or we just let the best AI models decide if a project is worth showing up on the front page?
Isn't it possible to fabricate the timestamps on commits and then push them up all at once? If you're planning on literally checking that the commits are publicly available for a certain amount of time, that seems like it would needlessly punish projects that someone worked on offline and then happened to push up once it was completed.
Unfortunately there's now a whole cottage industry of "vibe-coding classes and marketing" (similar to "life-coaching" on socials) that probably targets HN as well. I think HN needs to think a layer of abstraction "higher" and model around some collection of semantics/metrics that allow filtering out "gloss without quality" vaporware or voting-ring tactics.
Once some users have extra power to push content to the front-page, it will be abused. There will be attempts to gain that privilege in order to monetize, profit from or abuse it in some other way.
The only option along this path would probably be to keep the list of such users very tightly controlled and each vouched for individually.
---
Another approach might be to ask random users (above a certain karma threshold) to rank new submissions. Once in a while, stick a Show HN post into their front page with up and down arrows, and mark it as a community service. Given HN volume it should be easy to get an average opinion in a matter of minutes.
I'm haunted by the criticism Dropbox received from HN users when they posted their project here. While I respect the views many of us have, I think this has the potential to have the StackOverflow effect where the community makes the whole process miserable and worse.
Note well that the most famous example of this is a misreading that has snowballed into a kind of cultural legend. The thread in question was about Dropbox's application to YC, not the value of Dropbox itself, and the feedback was constructive and well-intentioned.
One problem here is that I don't want to share my Show HN projects with my HN account, because it'll connect my real identity to my pseudonymous HN identity.
So, in the past, I've created throwaway HN accounts for sharing things that connect to my real ID.
I'm not entirely sure that's a bad thing. If you aren't proud enough of it to attach your real name, or your pseudonymous account, maybe it shouldn't be posted.
It would filter out some people who lurk but still have interesting things to contribute, but despite the drawbacks that threshold is probably the most immediately impactful solution. Of course, it strongly incentivizes the purchase/selling of accounts, and karma farming, but that problem is perhaps less of a problem than all high-effort human content getting completely drowned out. There are already a plethora of spambots making comments getting upvoted, so it's not like that problem doesn't already exist either.
I just have no interest in seeing code that hasn't even been read by the author. At that point, just show us what it does, don't show us the GitHub link.
On the front page, someone made a cool isometric NYC map via vibe coding - another front pager was someone who also claimed to make an ultra fast PDF parser that failed on very common PDFs and gamed the speed metric by (useless) out of order parsing.
Guess which one I installed and spent more time using? These vibe coded projects aren't interesting for their code and not intended to be used by anyone if they're libraries, but the applications made with vibe coding are often very cool.
An easy win is turning off the firehose of vibe-coded GitHub portfolio projects and just asking for a link to a hosted application. Easy.
I’m in some discord servers that have both #art and #ai-art channels. This seems to work well. It’s not perfect but it’s cheap and might be good as a start.
I wonder if some kind of voluntary tagging system could help?
e.g. [20h/2d/$10] could indicate "I spent 20 human-hours over 2 days and burned $10 worth of tokens" (it's hard to put a single-dimensional number on LLM usage and not everyone keeps track, but dollars seem like a reasonable approximation)
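The proposed tag is regular enough to machine-read, which would let a filter or alternate front end sort by declared effort. A small sketch of a parser (the `[20h/2d/$10]` format is the proposal above, not anything HN actually supports):

```python
import re

def parse_effort_tag(title: str):
    """Pull a hypothetical '[20h/2d/$10]' effort tag out of a title.

    Returns (human_hours, days, token_dollars), or None if no tag
    is present.
    """
    m = re.search(r"\[(\d+)h/(\d+)d/\$(\d+)\]", title)
    if not m:
        return None
    hours, days, dollars = map(int, m.groups())
    return hours, days, dollars

print(parse_effort_tag("Show HN [20h/2d/$10]: My side project"))  # (20, 2, 10)
print(parse_effort_tag("Show HN: Untagged project"))              # None
```

Of course the numbers are self-reported and unverifiable; the tag's value would come from the social norm around it, not the parsing.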
To my dismay, the trajectory of Show HN posts looks eerily familiar. ProductHunt followed a similar course (albeit with much more acceleration) and now is just a feed of slop. The signal to noise ratio became so meaningless that I lost all interest. I fear this happening to HN. Any attempt to slow this down is welcome.
I wonder how this review system will work. Perhaps a Show HN is hidden by default and visible only to experienced HN users who provide enough positive reviews for it to become visible to everyone else. Although this does sound like gatekeeping to me, and may starve many deserving Show HNs before they get enough attention.
Funny a year ago I used to hear from so many people who thought a Product Hunt launch was the same as a marketing plan but it's been a while since I've heard about Product Hunt...
- Min. 90 days account existence in order to submit
- Cap on plain/Show/Ask HN posts per week
Most of the spam I see in /new or /ask is from fresh accounts. This approach is simple and rewards long-term engagement/users while discouraging fly-by-night spammers.
I've loved some of the vibe coded apps that are hosted somewhere that have made the front page, but a lot of the links are to GitHub projects intended to farm stars for throwaway portfolio padding (and those often don't work).
Something based on the principles of 'New'? (not clear on the details of how Show HN works, does it automatically appear?). Just shove entries under 'New' and let the group decide what is "Show HN"-worthy.
Every system can be gamed, but if it were me and I were looking for a simple filtering solution, I would do something like this ..
Set a policy of X comments required per submission in the last 30 days (not counting last 24 hours) for all submissions, not just "Show HN:" posts.
Meaning, users would need to post X comments before they could post a submission and by not counting the last 24 hours, someone couldn't join, post X comments and immediately post a submission.
It would limit new submission posts to people who are active in the community so they would be more familiar with the policies and etiquette of HN along with gaining an idea of what interests its members.
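The rule in the comment above is simple enough to state as code. A sketch of the eligibility check (the threshold of 5 comments is a made-up value for illustration):

```python
from datetime import datetime, timedelta

def may_submit(comment_times: list[datetime], now: datetime,
               required: int = 5) -> bool:
    """A user may submit only if they posted `required` comments in
    the last 30 days, ignoring anything from the last 24 hours, so
    joining and rapid-fire commenting doesn't immediately unlock
    submissions.
    """
    window_start = now - timedelta(days=30)
    cutoff = now - timedelta(hours=24)
    eligible = [t for t in comment_times if window_start <= t <= cutoff]
    return len(eligible) >= required

now = datetime(2026, 2, 17, 12, 0)
# Fresh account: five comments, all in the last few hours.
fresh_account = [now - timedelta(hours=h) for h in range(1, 6)]
# Regular: ten comments spread over the past couple of weeks.
regular = [now - timedelta(days=d) for d in range(2, 12)]
print(may_submit(fresh_account, now), may_submit(regular, now))  # False True
```

The 24-hour exclusion is what does the work here: without it, the check degenerates into "post N comments, then immediately submit".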
One thing I noticed recently while going through several of the Show HN submissions was that a lot of the accounts had been created the same day the submission was made.
My guess is HN has become featured on a large number of "Where do I promote/submit my _____?" lists in blogs, social media, etc. to the point that HN is treated like a public bulletin board more than a place to share things with each other in the community.
I love the Show HN section because so many interesting things get posted there but even I have cut back on checking it lately because there are simply too many things posted to check out.
Maybe restrict Show HN posts to 70 or 100 characters - readers can then scan many, and quickly find stuff of interest.
The clarity and focus this discipline would enforce could have a pleasant side effect of enabling a kind of natural evolution of categorizations, and alternative discovery UIs.
Maybe don't show them under "Show HN" unless the post has accumulated a certain number of points/upvotes? Just like a regular new post reaching the front page.
I think this already happens but the threshold might be pretty low - there's a separate shownew page and if it gets sufficient upvotes it appears in the regular "Show HN".
> One idea that a user proposed is a review queue where experienced HN users would help new Show HN submitters craft their posts to be more interesting and fit HN's conventions more.
HN has a vouch system. Make a Show HN pool, allow accounts over some karma/age level to vouch them out to the main site. I recently had a naive colleague submit a Show HN a week or so ago that Tom killed... for good reason. I told the guy to ask me for advice before submitting a FOSS project he released and instead he shit out a long LLM comment nobody wants to read.
The HN guidelines IMO need a (long overdue) update to describe where a Show HN submission needs to go and address LLM comments/submissions. I get that YC probably wants to let some of it be a playground since money is sloshing around for it, but enough is enough.
Maybe you should train an LLM to judge the content /s
More seriously though, I think some sort of curation is unavoidable with such topics. If you get inspired by stack overflow where you have some similar mechanics at work, then I'd say that is not too bad. But of course you risk some people being angry about why their amazing vibe coded app is not being shown. Although the more I think of it, this might be a good thing.
Edit: One more thought just came to my mind. A slight modification to the curation rule: you let everything through, just like now. However, the posts are reviewed, and those with enough positive review votes get marked in some shape or form, which allows them to be filtered and/or promoted on the show page.
I launched an idea 75 days ago, here as Show HN. It snowballed into a little community and a game that now sells every day. Maybe not an overnight sensation but the encouragement I found in the community was the motivation that i needed to take it further to a bigger audience.
It was not just a product launch for me. I was, sort-of in a crisis. I had just turned 40 and had dark thoughts about not being young, creative and energetic anymore. The outlook of competing with 20 year old sloptimists in the job market made me really anxious.
Upon seeing people enjoying my little game, even if it's just a few HNers, I found an "I still got it" feeling that pushed me to release on Steam, to good reviews.
It was never about the money, it was about recovering my self confidence. Thank you HN, I will return the favour and be the guy checking the new products you launch. If Show HN is drowning, I will drown with it.
I missed your Show HN, but I got the game now. Looks fun, and the fact that each citizen is simulated reminds me of Banished, which I enjoyed playing! Was happy to spend some wallet money I got from CSGO cases.
Thank you for making it, and don't give up. Passion and vision > vibe coding sloptimists.
Wow so cool! I think a big part of the Show HN slop are GitHub links or libraries that haven't even been read or used by their authors outside of the test suite on their local machine.
I'm sure a happy medium is shutting off links to vibe-coded source code and only allowing vibe-coded hosted applications or websites. For those of us who want to read code, source code that means nothing to anyone is pretty disappointing for a Show HN.
Your site https://microlandia.city and OP blog https://www.arthurcnops.blog/death-of-show-hn/ as well as most personal sites posted on HN are almost always inaccessible via my corporate job's firewall. Does anyone know why this is? Something with security certificate? They're not explicitly blocked, because you get a "this is blocked" page in that case. With these sites they just show a "can't connect" error.
I did a Show HN a few years ago on another account. It got no upvotes but that website/app has generated over $6m in revenue in that time (over $4.5m profit). Not sure what my point is but thought I'd share
This happens all the time, it’s a good thing to really keep in mind. All of my best projects were dismissed initially and continue to be. There is a reddit post where I announced Blockheads to a handful of “looks like a crappy Minecraft ripoff” comments. It went on to be played by 50 million people.
What the actual f. I am one of those 50 million people, played Blockheads all the time as a teen and had no idea I would randomly stumble over the dev around here. Great work indeed, thanks for the fond memories I made on my 4th gen iPod touch, playing that game :)
It’s different every time, but basically “marketing”. No matter where you are showing your stuff, it’s in a subset of the population, chances are HN won’t be buying your app subscription. You need to get it in front of your actual audience.
See also "Product Hunt". Oddly it's been about a year since I've noticed anybody who mistakes a Product Hunt launch for a marketing plan but that used to be endemic.
I think HN is a very particular group of people and not representative of the market for a lot of the products we make. We tend to like open-source things, ask lots of technical questions and complain about minute things. Also, Show HNs tend to perform better if they are quick to use (no sign in required, don't need to download, etc.).
There's software for running your own program so they handle most of that except the actual payment part, monitoring for fraud, etc. Plus they don't give any visibility to your program which a network would help with to an extent
30%. But it brought tons of word of mouth and such after the ball got rolling so the total affiliate commission compared to our revenue lifetime is closer to 10-15%
Similar experience. I posted a Show HN two days ago for a children's book generator - type a story idea, get a fully illustrated printed book shipped to you. Offered a free printed book including shipping to the HN community via voucher code. Got 7 points, 2 comments, and zero voucher redemptions. Nobody even ordered the free book.
One of those comments was genuinely useful feedback from Argentina about localization. That alone made it worth posting. But the post was gone from page 1 in what felt like minutes.
What's interesting is this isn't a weekend vibe-coded project - it involves actual physical production, printing, and shipping. But from the outside it probably looks like "another AI wrapper," which I think is the core problem: the flood of low-effort AI projects has made people reflexively skeptical of anything that mentions generation, even when there's real infrastructure behind it.
If you don't mind some unsolicited and blunt feedback: I suspect the reason this didn't get a lot of traction is that it is unclear why customers would want this. Sorry, that's probably too harsh, but I find it difficult to imagine anyone buying this:
- Children's books, at least the well-reviewed ones, are pretty good
- This is AI generated, so I expect the quality to be significantly lower than a children's book. Flipping through the examples, I am not convinced that this will be higher quality than a children's book.
- At 20 euros for a paperback, this is also more expensive than most children's books
- Your value prop, as I take it, is that your product is better because it is a book generated for just one child, but I am not convinced that's a solid value prop. I mean, it is kind of an interesting gimmick, but the book being fully AI generated is a large negative, and the book being uniquely created for my kid is a relatively smaller positive.
Those are definitely the highest-order bits you need to prove to me in order to get traction. A couple of smaller things you should fix as well:
- As an English speaker, almost all the examples are not in English. You should take a reasonable guess at my language and then show me examples in my language
- It's difficult to get started: "Create your own book" leads to a signup page and I don't want to go through that friction when I am already skeptical
Thanks for the blunt feedback - genuinely appreciate it.
You're right that children's books can be excellent, and for generic topics a well-reviewed book from a skilled author and illustrator will beat what we generate. No argument there.
Where we see real value is in the gaps the publishing industry doesn't serve. Bilingual families who can't find books in Maltese/English or Estonian/German. A child with an insulin pump who wants to see a superhero like them. A kid processing their parents' divorce. A child with two dads, or being adopted, or starting at a new school in a country where they don't speak the language yet. No publisher will print a run of one for these families - but these are exactly the stories that matter most to them.
On the UX points - you're right on both. We should localize the showcase to your language, and the signup wall before trying is too much friction. Working on both.
Personally, I find AI generated text disengaging. I also heard some professional writers immediately notice disorganized storytelling patterns. Have you found a way to fix this? Is there any soul in generated texts?
AI art is massively downvoted here and on Reddit, but boomers on facebook seem happy to share it. So I think you'll do better on other platforms. The opinion of AI generated creative work is just very low here. I personally agree, I've never seen an AI generated story that was interesting and I don't want to expose my children to it. I'd rather they get real stories written by real people.
It's definitely been amplified severely by agent coding, but what's worse is that the most meme hustle-culture part of bringing ideas to life has been the most magnified, because it's the easiest. I started paying less and less attention when the whole "just get yourself a mailing list to test there's a market for your product" meme started gaining popularity, but people were at least constrained by the time it took to cobble together a generic landing page with an email signup. Now there's effectively no limit to how many shadcn boilerplate email collectors can be tossed together in a night. The more of that redundant stuff gets put on Show HN, the less I check it, but also the less I trust the "products" or the potential products, and that doesn't seem like it would be in anyone's best interest.
Give people the ability to submit a "Show HN" one year in advance. Specifically, the user specifies the title and a short summary, then has to wait at least a year until they can write the remaining description and submit the post. The user can wait more than a year or not submit at all; the delay (and specifying the title/summary beforehand) is so that only projects that have been worked on for over a year are submittable.
Alternatively, this can be a special category of “Show HN” instead of replacing the main thing.
I'd push back on this and say that the #1 problem with the discourse about AI now (e.g. why I'd almost never upvote a blog post about AI coding) is that it is too focused on 2026-02-17. That is, I couldn't care less about optimizing to pick the best model or agentic workflow, because it's all going to be obsolete in a year.
I am wary of blogs by celebrity software managers such as DHH, Jeff Atwood, Joel Spolsky, and Paul Graham, because they talk as if there were something special about their experience in software development and marketing except... there isn't.
The same is true for the slop posts about "How I vibe coded X", "How I deal with my anxiety about Y" and "Should I develop my own agentic workflow to do Z?" These aren't really interesting because there isn't anything I can take away from them. Doomscrolling X you might do better, because little aphorisms like "Once your agent starts going in circles and you find yourself arguing with it, you should start a new conversation" are much more valuable than "evaluations" of agents where you didn't run enough prompts to keep statistics, or a log of a very path-dependent experience you had. At least those celebrity managers developed a product that worked and managed to sell it; the average vibe coder thinks it is sufficient that it almost worked.
An additional factor missing from the post, I think, is AI.
Before, projects were more often carefully human crafted.
But nowadays we expect such projects to be "vibe coded" in a day. And so, we don't have the motivation to invest mental energy in something that we expect to be crap underneath and probably a nice show off without future.
Even if the result is not the best in the world, I think that what interests us is to see the effort.
Had a funny experience with this some weeks ago. I started developing a small side project and after a week I wondered if this existed already. To my surprise, someone had already built something relatively similar _with the exact same name_ (though I had chosen mine as a placeholder, still funny though) only 2 weeks before, and posted it in Show HN.
I took a look at the project and it was a 100k+ LoC vibe-coded repository. The project itself looked good, but it seemed quite excessive in terms of what it was solving. It made me think, I wonder if this exists because it is explicitly needed, or simply because it is so easy for it to exist?
The signal/noise problem here cuts both ways. Yes, good projects get buried. But the reason they get buried is that there is genuinely more noise, not less attention from the community.
When I launched a side project a couple years ago, getting to the front page felt like a real achievement requiring weeks of iteration and genuine problem-solving. Now you can vibe-code something in a weekend and post it. The median Show HN quality has dropped, so people naturally vote less aggressively on the category as a whole.
The 37% stuck at 1 point stat is the real story. The solution is not changing HN mechanics. It is people being more selective about what they post - and the community being more willing to say "this is not ready" in the comments rather than just silently scrolling past.
Perhaps it's the right moment to start an AI Show HN (Vibe HN as recommended above), as I assume more than half of Show HN is now from ChatGPT/Claude, and it's impossible to cut through this noise with something reliable that humans craft over years.
It's fair to give the audience a choice to learn about an AI-created product or not.
HN already has a mechanism for flagging posts. If flagging low effort/trivial show HN posts were normalized, I suspect it would work just fine: if a human can't easily decide whether or not to flag it, they likely wouldn't.
As dang posted above, I think it's better to frame the problem as "influx of low quality posts" rather than framing policies having to do explicitly with AI. I'm not sure I even know what "AI" is anymore.
I have talked with some friends who are long-time programmers (20+ experience). Even they (all) admit that they use Claude Code, OpenAI Codex, Google Antigravity or AI Studio - you name it.
So in future everything’s gonna be “agentic”, (un)fortunately.
Every time I write about it, I feel like a doomsayer.
Anthropic admits that LLM use makes the brain lazy.
So just as we stopped remembering phone numbers after Google and mobile phones came along, it will probably go the same way with coding/programming.
OK, let's say there are two categories of software now.
One is where the human has a complete mental map of the product, and even if they use some code generating tools, they fully take responsibility for the related matters.
And there is another, emerging category, where developers don't have a full mental map because it was created by an LLM, and no one actually understands how it works and what doesn't.
I believe these are two categories that are currently merged into one Show HN, and while in the first category I can be curious about the decisions people made and the solutions they chose, I don't give a flying fork about what an LLM generated.
If you have a 'fog of war' in your codebase, well, you don't own your software, and there's no need to show it as yours. In the same way that autocomplete, or a typewriter in the time of handwriting, wasn't a problem as long as the thinking was yours, an LLM shouldn't be a problem.
> And there is another, emerging category, where developers don't have a full mental map as it was created by an LLM, and no one actually understands how it works and what does not.
I work with a large number of programmers who don't use AI and don't have an accurate mental map for the codebases they work in...
I don't think AI will make these folks more destructive. If anything, it will improve their contributions because AI will be better at understanding the codebase than them.
Good programmers will use AI like a tool. Bad programmers will use AI in lieu of understanding what's going on. It's a win in both cases.
> And there is another, emerging category, where developers don't have a full mental map as it was created by an LLM, and no one actually understands how it works and what does not.
Are the tokens to write out design documentation and lots of comments too expensive or something? I’m trying to figure out how an LLM will even understand what they wrote when they come back to it, let alone a human.
You have to reify mental maps if you have LLM do significant amounts of coding, there really isn’t any other option here.
There's a difference between not knowing the internals of a dependency you chose deliberately and not understanding the logic of your own product.
When you upgrade a library, you made that decision — you know why, you know what it does for you, and you can evaluate the trade-offs before proceeding (unless you're a react developer).
That's not a fog of war, that's delegation.
When an LLM generates your core logic and you can't explain why it works, that's a fundamentally different situation. You're not delegating — you're outsourcing the understanding, and that makes the result not yours.
No. I see this mistake everywhere. You're confusing "knowing everything" or "making assumptions" with "mental maps". They are not at all the same thing.
The benefit of libraries is it's an abstraction and compartmentalization layer. You don't have to use REST calls to talk to AWS, you can use boto and move s3 files around in your code without cluttering it up.
Yeah, sometimes the abstraction breaks or fails, but generally that's rare unless the library really sucks, or you get a leftpad situation.
Having a mental map of your code doesn't mean you know everything, it means you understand how your code works, what it is responsible for, and how it interacts with or delegates to other things.
Part of being a good software engineer is managing complexity like that.
I don't think so. At the end of the day it's just a tool.
Case in point: aside from Tabbing furiously, I use the Ask feature to ask vague questions that would take my coworkers time they don't have.
Interestingly at least in Cursor, Intellisense seems to be dumbed down in favour of AI, so when I look at a commit, it typically has double digit percentage of "AI co-authorship", even though most of the time it's the result of using Tab and Intellisense would have given the same suggestion anyway.
Maybe. It's basically "people who know how shit works" vs "people who don't know how shit works". I hope we still have at least some people in category 1 or else we just end up with Wall-E.
The worst part of the death of Show HN is that most of these people are so allergic to putting any effort in that they can't even write the description themselves. The repo's readme, the ShowHN post, and often even their comments will all be fully LLM-generated. This doesn't even take skill! Writing good marketing copy might take skill, but ShowHN isn't (supposed to be) marketing. Just describe the project in your own words, I promise it's not that hard. The bar is so low that even copy-pasting whatever you prompted to the LLM would be more interesting than the LLM's output. Although maybe it's better this way, since it makes it easier to filter out the garbage instantly.
> often even their comments will all be fully LLM-generated
This really bothers me, coming here asking for human feedback (basically: strangers spending time on their behalf) then dumping it into the slop generator pretending it is even slightly appreciated. It wouldn't even be that much more work to prompt the LLM to hide its tone (https://news.ycombinator.com/item?id=46393992#46396486) but even that is too much.
How many non-native English speakers are on HN? If it's more than 30%, why should they have to write in a whole new language when they can just let an LLM do it in a natural-sounding way?
Reminds me of the quote: "Nobody Goes There Anymore, It’s Too Crowded"
Some of it is "I wish things I think are cool got more upvotes". Fair enough, I've seen plenty of things I've found cool not get much attention. That's just the nature of the internet.
The other point is show and share HN stories growing in volume, which makes sense since it's now considerably easier to build things. I don't think that's a bad thing really, although curation makes it more difficult. Now that pure agentic coding has finally arrived IMO, creativity and what to build are significantly more important. They always were but technical ability was often rewarded much more heavily. I guess that sucks for technical people.
> "I wish things I think are cool got more upvotes"
HN has a very different personality at weekends versus weekdays. I tend to find most of the stuff I think is cool or interesting gets attention at the weekends, and you'll see slightly more off the wall content and ideas being discussed, whereas the weekdays are notably more "serious business" in tone. Both, I think, have value.
So I wonder if there's maybe a strong element of picking your moment with Show HN posts in order to gain better visibility through the masses of other submissions.
Or maybe - but I think this goes against the culture a bit - Show HN could be its own category at the top. Or we could have particular days of the week/month where, perhaps by convention rather than enforcement, Show HN posts get more attention.
I'm not sure how workable these thoughts are but it's perhaps worth considering ways that Show HN could get a bit more of the spotlight without turning it into something that's endlessly gamed by purveyors of AI slop and other bottom-feeding content.
I think it's just numbers. There are maybe a few dozen people who see your post on /new. That's a tiny sample size, not a good proxy for how interesting the post is. You see this on Reddit as well, where the exact same post gets 1 upvote one time and then finally blows up the next.
Chasing clout through these forums is ill-advised. I think people should post, sure. But don't read into the response too much. People don't really care. From my experience, even if you get an insanely good response, it's short-lived; people think it's cool. For me it never resulted in any conversions or continued use. It's cheap to upvote. I found the only way to build interest in your product is organic: 1-on-1 communication, real engagement in user forums, etc.
It's a good reminder that instead of hitting "refresh" on HN, hit up /new for a bit and drop some votes. Probably the most significant votes you'll cast in a long time.
This is a forum called Hacker News. It’s for technical people. Perhaps these LLM-generated slop projects could get posted on Product Hunt or somewhere focused on the creative product side of tech and not technical knowledge and discussion
This is part of a bigger problem with vibe coding IMO. It's not just Show HN but signaling credentials in general. How would you signal that you actually put effort into your project on a resume or social event/presentation when others could just vibe-code some good looking but nonetheless unusable projects and show that off instead?
I've thought about this. Even in the pre-LLM era, projects were rarely judged by the quality of their source code. READMEs and slick demos were the focus. So in some sense nothing has changed.
The difference now is that there is even less correlation between "good readme" and "thoughtful project".
I think that if your goal is to signal credentials/effort in 2026 (which is not everyone's goal), a better approach is to write about your motivations and process rather than the artefact itself - tell a story.
The framing of "Is Show HN dead?" misses something fundamental. Show HN was never a separate product. It's just a tag on the same ranking algorithm that handles everything else. Stories rise and fall by the same gravity formula, and Show HN posts compete directly with major tech news, drama, and viral essays.
I've launched multiple side projects through Show HN over the years. The ones that got traction weren't better products. They hit the front page during a slow news hour and got enough early upvotes to survive the ranking curve. The ones that flopped were arguably more interesting but landed during a busy cycle. That's not a Show HN problem, that's a single-ranking-pool problem.
What would actually help is a separate ranking pool for Show HN with slower time decay. Let projects sit visible for longer so the community can actually try them before they drop off. pg's original vision was about making things people want. Hard to evaluate that in a 90-minute window.
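The "slower time decay" idea can be sketched with the often-cited folk approximation of HN's gravity formula (the real algorithm has additional penalties and factors; the exponent ~1.8 is community folklore, and the separate-pool gravity of 1.2 below is purely hypothetical):

```python
def rank(points, age_hours, gravity=1.8):
    """Folk approximation of HN ranking: score divided by an age penalty.
    Higher gravity means faster decay; 1.8 is the commonly cited value."""
    return (points - 1) / (age_hours + 2) ** gravity

# A modest Show HN vs. a big news story, both 12 hours old:
show_hn = rank(30, 12)                     # shared pool, standard decay
news = rank(200, 12)                       # easily outranks it
show_hn_slow = rank(30, 12, gravity=1.2)   # hypothetical Show HN pool

print(show_hn < news)            # the Show HN loses in the shared pool
print(show_hn_slow > show_hn)    # slower decay keeps it visible longer
```

Under this model a separate pool with lower gravity doesn't make projects score higher at birth; it just widens the window in which people can still find and try them.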
I wrote an internal combustion engine sim in C with what I'd assume is some pretty alright procedural audio generation and posted it to Show HN (https://github.com/glouw/ensim4), with which I got 2 upvotes. I understand it's niche, but I thought HN loved this sorta demoscene stuff.
C'est la vie and que sera. I'm sure the artistic industry is feeling the same. Self expression is the computation of input stimuli, emotional or technical, and transforming that into some output. If an infallible AI can replace all human action, would we still theoretically exist if we're no longer observing our own unique universes?
Tried to use Show HN for my new project a couple months ago with almost no traction. It's a software literacy tutor, so I guess it's not the right audience, but my intuition aligns with this. For reference, an earlier post showing the practice engine that powers the literacy tutor did pretty well back in 2023 and it was my first post. I've had more success getting sign ups trying to do just the tiniest bit of SEO.
Basically: more people are having more ideas that they’re able to execute to at least a minimal degree. That doesn’t seem bad, but like an editor’s slush pile, yeah: things are gonna get lost in the noise.
The small indie developer ain't dead yet, and from where I sit you could drive a star destroyer through the gaps in what software has been built so far.
It's only that you can't claim any of the top shelf prizes by vibe coding
Vibe coding as a term is really annoying. At what point does a project stop being considered vibe coded? If I spent a year iterating on a design and implementation using Claude Code, in a domain I’m an expert in, will that still be considered vibe coded?
I'll counter you this: I don't use AI at all, but in a way I am a vibe coder, even though I'm five years of full time work into one OSS project. I have no code review. I move fast and ship bugs. I roll forward.
I see no reason to disrespect your work from what you say, but I also see no reason that AI would be much help to you after you had been learning for a year. If you are in the loop, shouldn't this be just about the moment when your growing abilities start to easily outpace the model's fixed abilities?
I view it as: do you have a full mental model of the code base?
If you do, then it's not vibe coded.
For me, I have different levels of vibes:
Some testing/prototyping bash scripts are 100% vibe coded; I have never actually read the code.
Sometimes, in early iterations, I am familiar with the general architecture but do not know the exact file contents.
Sometimes I have gone through and practically rewritten a component from scratch, either because it was too convoluted or because it did not have the perfect abstraction I wanted.
For me the third category is not vibe coded. The first 2 are tech debt in the making.
Vibe coding is no-look coding. It's largely being replaced by agents that do the iteration to the point where no human is involved beyond an initial project description like "Build me a web browser".
>At what point does a project stop being considered vibe coded?
Good question, and by the same "token" when does it start?
Maybe if there's no possible way the creator could have written it by hand, perhaps due to almost complete illiteracy to code in any language, or something like that, it would be a reference point for "pure vibe". If the project is impressive, that's still nothing to be ashamed of. Especially if people can see the source code.
All kinds of creative people I see are mostly no dummies and it might be better than nothing for them to honestly rate their own submissions somewhere on the scale from pure vibe to pure manual?
With no stigma regardless, and let the upvotes or downvotes from there give an indication of how accurate the self-assessments are. Voting directly to Show HN could even have a different "currency" [0] to help regulate the fall of Show submissions, where a single upvote could mean something like infinitely more than zero.
I'm not disappointed by a project purely vibed by somebody like a visual artist, storyteller, or business enthusiast who has never written a line of code, as long as it is astoundingly impressive, in the league of the better projects, those I would like to take a look at.
I also see real accomplished coders guide their agents to arrive at things that wouldn't be as nice if they didn't have years of advanced manual ability beforehand.
Plus I think I'm in the vast majority and have no interest in "slop", in a way that aligns with so many kinds of people who are also turned off.
But so far, the best definition we have for slop is "we know it when we see it".
Oh, well that's all I've got, so far :)
[0] slop vs non-slop which is like pass/fail, or even a numerical rating could be on the "ballot".
I had a similar experience trying to get feedback on my attempt to help different role families adopt AI evals as a common language (hands on tutorial or tool comparison).
I attribute it mostly to my own inability to pitch something that is aimed for many audiences at once and needs more UX polishing and maybe a bit on timing.
It's tough when you're not looking to sell a product but rather to engage with a community without going the twitter/bluesky route (which I may begrudgingly start using).
Maybe evals is a problem that people don't have yet because they can just build their custom thing or maybe it needs a "hey, you're building agent skills, here's the mental model" (e.g. https://alexhans.github.io/posts/series/evals/building-agent... ) and once they get to the evals part, we start to interact.
In any case, I still find quite a lot of cool things in SHOW HN but the volume will definitely be a challenge going forward.
The legend says SHNs are getting worse, but surely if the % of SHN posts with 1 point is going DOWN (as per the graph) then it's getting better? Either I am dense or the legend labels are the wrong way round, no?
The long-term trend (ie since 2023) is for more ShowHN posts to be stuck at 1 point compared with normal posts, and for that gap to be growing. This implies that people find the ShowHNs to be less and less interesting.
Crazy idea, but maybe change the rules of Show HN so that you are required to include in the headline how long you have been working on the project. As an example, something like:
Show HN: My Project - A description for my vibe coded project [3 weeks]
A lot of the good stuff I see on Show HN are projects that have been worked on for a long time. While I understand that vibe coding is newer trend, I also know that vibe coded projects are less likely to stand the test of time. With this, we don't have to worry about whether a project is AI assisted or not, nor do we ban it. Instead just incentivize longer term projects. If the developer lies about how long they worked on the project, they will get reported and downvoted into oblivion.
I don’t agree with this drowning sentiment. It’s much easier now to build capable stuff. That’s what the data is showing you. Pre AI nostalgia - sure, I built a PPC profiler in assembly by hand, but who am I to say the latest AI induced gadgetry is not as cool. And I am an active participant.
The fact that the volume is exploding but the graveyard is also exploding, is a sign that the system is working, not that it's broken (the filter is working).
I did 3 ShowHN in 2024 (outside of the scope of this analysis), one with 306 points, another with 126 points and the third with... 2. There's always been some kind of unpredictability in ShowHN.
But I think the number one criteria for visibility is intelligibility: the project has to be easy to understand immediately, and if possible, easy to install/verify. IMHO, none of the three projects that the author complains didn't get through the noise qualify on this criteria. #2 and #3 are super elaborate (and overly specific); #1 is the easiest to understand (Neohabit) but the home page is heavy in examples that go in all directions, and the github has a million graphics that seem quite complex.
Tangent but related: I posted on Who Wants to Be Hired and, in comparison to the last time I posted there (2019-ish), I received only spam emails: no offers, nothing at all. Luckily I used an alias for that post, but I'm reluctant to delete it in case anyone interested comes along.
I'm not sure I agree with the Sideprocalypse idea linked in the blog. Granted, there is a lot of "hustle" content out there about how all you need to do is vibe code an existing business idea and pay for their SEO course. And if you're one of those people selling that, or one of the people believing it... well, play stupid games, win stupid prizes.
Where the vibe coders with their slop cannons aren't present though is in things that require hard won domain knowledge. IE, stuff that requires you to actually create a new idea, off an understanding of actual areas of need.
And that kind of thing probably isn't going to do well on Show HN, because your audience probably isn't on HN.
There should probably be "Show Vibe:" simply because this is something new. This is something radical.
I was a skeptic last year, and now... not so much. I am having Claude build me a distributed system from scratch. I designed it last week as I was admitting to myself the huge failure of my big "I love to code" project that I failed to get traction on.
It took me a week to even give the design to Claude because I was afraid of what it meant. I started it last night, and my jaw is dropped. There is a new skill being grown right now, and it... is something.
It certainly isn't nothing, and I for one am curious to simply see what people are making with vibes alone. It's fascinating... and horrifying.
But, I have learned to silence that part of me that is horrified since the world never cared for what I find beautiful (i.e. terrible languages like JavaScript)...
I think it is true with any distribution channel. When people figure out that it works, then everyone ends up bombarding that channel till it saturates.
Vibe coding is not helping either, I guess. Now it is even cheaper to create assets for the distribution channel.
> Show HN of course isn't dead. You could even say it's more alive than ever.
You could argue it's dead in the sense of "dead internet theory". Yes, more projects than ever are being submitted, but they were not created by humans. Maybe they are being submitted by humans, for now.
I've long wanted something like Blog HN as a way to post things that I wrote without feeling guilty about submitting my own site. Things that authors themselves write and post are often a good signal. But this should be completely separate from any new products, etc.
I think that Show HN should be used sparingly. It feels like collective community abuse of it will lead to people filtering them out mentally, if not deliberately. They're very low signal these days.
Get on the Fediverse. A hosted Mastodon account isn't that expensive, or you can get an account for free on just about any instance. Curate programming and developer accounts (there are tons.) Post your blog there.
I think vibe coding something and showing it off on Show HN is probably fine, but it boils my blood when people cannot even be bothered to write the post body themselves. If someone is using an AI generated post body and title that's usually a clear signal of slop for me. The post body is supposed to be part of the human connection element!
That's a meaningful sign to me too, except there are some brilliant tech people who mainly need all the help they can get just with their English.
Even before AI got so strong, some of the translations were fairly abnormal in their own way.
>The post body is supposed to be part of the human connection element!
I really think this is the best too :)
Maybe for the non-English speakers, or anyone really, if a project means a lot, have a number of people who are smart in different ways look over the text a number of times and help you edit beforehand.
To make sure it's what you the human want to really say at the time.
I’ve built my share of AI stuff (although more using AI in the product than vibe coding), so I won’t complain. But I did get frustrated when I recently posted a Show HN that I thought the HN community would like, and no one did.
It is a comeback of a post that stayed for a few hours on the front page a few years ago. Also, it is a useful, non-AI-slop, free product. So when it got no upvotes, it made me realize I don’t understand the HN community the way I used to think I did.
Here is the post for the curious
Show HN: (the return of) Read The Count of Monte Cristo and others in your email
I don't think it's a case that no one liked it, there's just too much going on that it probably never came across the right eyes.
I linked one of my projects in a post and it got some really good responses. I did a bit more work on it and posted a Show HN thinking a few people might be interested but it got 0 traction.
I even made it a point to go on the new Show HN and check out some people's projects (how can I expect anyone to check mine out if I'm not doing the same), and it is hard to keep up.
I have another app that I've been working on for the past 3 months, and whilst I want to do a Show HN to discuss how I built it (the moments I was banging my head against the wall working on a bug, and so on), I sadly wonder if there's any point.
I'm so disappointed about what happened to this industry. It's worse than I could have imagined.
The market is saturated with superficial solutions that look amazing at a glance but don't work at all in the medium or long term yet it doesn't matter at all; they don't even have an incentive to improve, ever, because the founder cashes out/exits before they need to worry about the stuff under the hood. Customer support is replaced by AI agents so nobody can feel the customer's pain anymore. Then investors find ways to financialize the product so that it doesn't depend on consumers anymore and can just tap into big contracts from big institutions... And yet they still spend big on ads, just to prevent new entrants from entering the market.
It reminds me of my time in crypto; the coins were sold as one thing, but all the big, well-known projects had barely half of the features implemented (compared to what was advertised)... And 10 years in, most of those projects cashed out big time and still don't have the features promised. Many shut down completely. Doesn't matter. The whole thing existed and succeeded as a pure shell project.
Time for a new category? "Slop HN: Claude built this mini tool for me" - would be lol to see the "slop" in the header right in the middle of "show | jobs" -> "show | slop | jobs"
I really don’t care if something is built with AI or not, however, when I check out Show HN, I’m interested in seeing new and novel things. Clawntown, the Show HN this article is about, was neither new nor novel. It’s another clone of things that I choose not to use.
Yet most of the time, if I spend five minutes a day on Show HN, I’ll find something new that I find interesting. I wouldn’t say that Show HN is drowning, but creativity seems to be on life support. I’m sure that’s somewhat a generative-AI problem, but LLMs are pretty good rubber ducks, and so I’m surprised by how acute the issue has gotten so quickly.
This aligns with my experience. It's good to have it properly analyzed.
If this effect is noticeable on an obscure tech forum, one can only imagine the effect on popular source code forges, the internet at large, and ultimately on people. Who/what is using all this new software? What are the motivations of their authors? Is a human even involved in the creation anymore? The ramifications of all this are mind-boggling.
Sadly, this problem isn't specific to HN either, any reddit sub that is even remotely related to software is absolutely flooded with "look at my slop" posts.
It feels like the age of creating some cool new software on your own to solve a problem you had, sharing it and finding other people who had the same problem, and eventually building a small community around it is coming to a close. The death of open source, basically.
I am a major advocate for AI assisted development.
Having said that, it used to feel part of an exclusive club to have the skills and motivation to put a finished project on HN. For me, posting a Show HN was a huge deal - usually done after years of development - remember that - when development of something worthwhile took years and was written entirely by hand?
I don't mind much though - I love that programming is being democratized and no longer only for the arcane wizards of the back room.
> I don't mind much though - I love that programming is being democratized and no longer only for the arcane wizards of the back room.
Programming has long been democratized. It’s been decades now where you could learn to program without spending a dollar on a university degree or even a bootcamp.
Programming knowledge has been freely available for a long time to those who wanted to learn.
It's better if you don't have to learn to program to make applications.
In the future it will seem very strange that there was a time when people had to write every line of code manually. It will simply be accepted that the computers write computer programs for you, no one will think twice about it.
There's a difference between documentation and LLMs. An LLM can be your own personal tutor and answer questions related to your specific code in a way no documentation can. That is extremely helpful until you master the programming language enough.
Vibe coders aren’t interested in mastering a programming language, or even interested in programming. How can you master something you’re not even doing?
Programming has been democratized in terms of “time invested in programming” by AI, which has resulted in exactly what happens to any high-investment community when a tool-assisted method of avoiding that investment is developed. You could ask any newspaper or movie script submissions reviewer before AI what percent of what they receive as uninvited-submissions is even slightly worth their time and they’ll look at you with the deadest eyes in the world and say “zero percent”. What invention led to their industries being buried in meaningless (relative to pre-invention) submissions that took a thousandth of the effort to produce than they did prior to it, without the editorial staff being scaled accordingly? The typewriter.
The obvious counterpoint is that AO3 is brilliant, which it is: give people a way to ontologize themselves and the result is amazing. Sure, AO3 has some sort of make-integer-go-up system, but it reveals the critical defect in “Show HN”: one pool for all submissions means the few that would before have been pulled out by us lifeguards are more likely to drown, unnoticed, amidst the throngs. HN’s submissions model only scales so far without AO3’s del.icio.us-inherited tagging model. Without it, tool-assisted creative output will increasingly overwhelm the few people willing to slog through an untagged Show HN pool. Certainly I’m one of them; at 20% by weight AI submissions per 12 hours in the new feed alone, heavily weighted in favor of show posts, my own eyes and this post’s graphs confirm that I am right to have stopped reading Show HN. I only have so much time in my day, sorry.
My interest in an HN post, whether in new or show or the front page, is directly proportional to how much effort the submitter invested in it. “Clippy, write me a program” is no more interesting than a standard generic rabble-rousing link to a GitHub issue, or a fifty-page essay about some economics point that could have been concisely conveyed in one. If the submitter has invested zero personal effort into whatever degree of expression of designcraft, wordcraft, and codecraft their submission contains, then they have nothing to Show HN.
In the rare cases when I interact with a show post these days, I’ve found the submissions to be functionally equivalent to an AI prompt: “here’s my idea, here’s my solution, here’s my app” but lacking any of the passion that drives people to overcome obstacles at all. That’s an intended outcome of democratization, and it’s also why craft fairs and Saturday markets exercise editorial judgment over who gets a booth or not. It’s a bad look for the market to be filled with sellers who have a list of AI-generated memes and a button press, whose eyes only shine when you take out your wallet. Sure, some of the buttons might be cool, but that market sucks to visit.
Thus, the decline of Show HN. Not because of democratization of knowledge, but because lowering the minimum effort threshold to create and post something to HN reveals a flaw-at-scale of the community-voting editorial model: it only works when the editorial community scales as rapidly as submissions, which it obviously has not.
Full-text search tried to deprecate centralized editorial effort in favor of language modeling, and turned out to be a disastrous failure after a couple decades due to the inability of a computer to distinguish mediocre (or worse) from competent (or better). HN tried to deprecate centralized editorial effort and it has survived well enough for quite some time, but gestures at Show HN trends graphs it isn’t looking good either. Ironically, Reddit tried to implement centralized moderation on a per-community basis — and that worked extremely well for many years, until Reddit rediscovered why corporations of the 90s worked so hard to deprecate editorial staff, when their editors engaged in collective action against management (something any academic journal publisher is intimately familiar with!).
In that light, HN’s core principle is democratizing editorial review — but now that our high-skill niche is no longer high-skill, the submissions are flooding in and the reviewers are not. Without violating the site’s core precepts of submission egality and editorial democracy, I see no way that HN can reverse the trend shown by OP’s data. The AO3 tagging model isn’t acceptable, as it creates unequal distinctions between submissions and site complexity that clashes with long-standing operator hostility towards ontologies. The Reddit and academic journal editorial models aren’t acceptable, as they create unequal distinctions between users and editors that clash with long-standing operator hostility towards exercising editorial authority over the importance of submissions. And HN can’t even limit Show HN submissions to long-standing or often-participating users, because that would prevent the exact discoveries of gems in the rough that Show used to be known for.
The best idea I’ve got is, like, “to post to Show HN, you must make several thoughtful comments on other Show HN posts”, which puts the burden of editorial review into the mod team’s existing bailiwick and training, but requires some extra backend code that adds anti-spam logic, for example “some of your comments must have been upvoted by users who have no preexisting interactions with your comments and continued participating on the site elsewhere after they upvoted you” to exclude the obvious attack vectors.
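That gate can be sketched as a small eligibility check. Everything here is hypothetical: the thresholds, the "thoughtful = at least 30 words" proxy, and the per-comment count of upvoters who had no prior interactions with the author and kept participating afterwards are all invented for illustration, not an actual HN mechanism.

```python
# Hypothetical sketch of the proposed Show HN posting gate: require
# prior thoughtful Show HN comments, some of them upvoted by unrelated,
# still-active users (the anti-spam signal described above).
def may_post_show_hn(comments, min_comments=3, min_independent_upvotes=2):
    """comments: list of dicts like
    {"words": int, "independent_active_upvoters": int}.
    All thresholds are invented for illustration."""
    # "Thoughtful" proxy: at least 30 words; drive-by replies don't count.
    thoughtful = [c for c in comments if c["words"] >= 30]
    independent = sum(c["independent_active_upvoters"] for c in thoughtful)
    return len(thoughtful) >= min_comments and independent >= min_independent_upvotes

history = [
    {"words": 45, "independent_active_upvoters": 1},
    {"words": 60, "independent_active_upvoters": 2},
    {"words": 5,  "independent_active_upvoters": 9},  # "nice!" doesn't count
    {"words": 80, "independent_active_upvoters": 0},
]
print(may_post_show_hn(history))  # True
```

The hard part, of course, is not this check but reliably computing the "independent and still active" upvoter counts on the backend.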
I wouldn’t want to be in their shoes. A visionary founder left them a site whose continuing health turns out to hinge upon creating things being difficult, and then they got steamrolled by their own industry’s advancements. Phew. Good luck, HN.
> For me, posting a Show HN was a huge deal - usually done after years of development
This is still possible. Vibe coders are just not interested in working on a piece of software for years till it's polished. It's a self selection pattern. Like the vast amount of terrible VB6 apps when it came out. Or the state of JS until very recently.
Let's see, how to say this in a less inflammatory way...
(just did this) I sit here in a hotel and I wondered if I could do some fancy video processing on the video feed from my laptop to turn it into a wildlife cam to capture the birds who keep flying by.
I ask Codex to whip something up. I iterate a few times; I ask why processing is slow, and it suggests a DNN. I tell it to go ahead and add GPU support while it's at it.
In a short period of time, I have an app that is processing video, doing all of the detection, applying the correct models, and works.
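(The actual Codex-generated code isn't shown here; as a rough illustration of the crude first stage such an app might use before swapping in a DNN detector, here's a minimal frame-differencing motion check in NumPy. The function name and thresholds are made up.)

```python
import numpy as np

def detect_motion(prev_frame, frame, threshold=25, min_pixels=50):
    """Flag a frame as 'motion' if enough pixels changed vs. the previous one.

    Frames are 2-D uint8 grayscale arrays. This is the naive stage a neural
    detector would replace; we just threshold the absolute pixel difference.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > threshold)
    return changed >= min_pixels

# A static background, and the same background with a bright "bird" blob.
background = np.zeros((120, 160), dtype=np.uint8)
with_bird = background.copy()
with_bird[40:60, 70:100] = 200  # 20x30 = 600 changed pixels

print(detect_motion(background, background))  # False: nothing changed
print(detect_motion(background, with_bird))   # True: blob exceeds thresholds
```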
It's impressive _to me_ but it's not lost on me that all of the hard parts were done by someone else. Someone wrote the video library, someone wrote the easy python video parsers, someone trained and supplied the neural networks, someone did the hard work of writing a CUDA/GPU support library that 'just works'.
I get to slap this all together.
In some ways, that's the essence of software engineering. Building on the infinite layers of abstractions built by others.
In other ways, it doesn't feel earned. It feels hollow in some way and demoing or sharing that code feels equally hollow. "Look at this thing that I had AI copy-paste together!"
And for something that is, as you said, impressive to you, that's fine! But the spirit of Show HN is that there was some friction involved, some learning process that you went through, that resulted in the GitHub link at the top.
I saw this come out because my boss linked it as a faster chart lib. It is AI slop, but people loved it. [https://news.ycombinator.com/item?id=46706528]
I knew I could do better, so I made a version that is about 15kb and solves a fundamental issue with WebGL context limits while being significantly faster.
AI helped write a lot of the code, especially around the compute shaders. However, I had the idea of how to solve the context limits. I also pushed past several perf bottlenecks that came from my fundamental lack of WebGPU knowledge, and in the process deepened my understanding of it. Pushing the bundle size down also stretched my understanding of JS build ecosystems and why web workers still are not more common (the special bundler settings for workers break often).
Btw, my version is on npm/GitHub as chartai. You tell me if that is AI slop. I don't think it is, but I could be wrong.
In the past, new modders would often contribute to existing mods to get their feet wet and quite often they'd turn into maintainers when the original authors burnt out.
But vibe coders never do this. They basically unilaterally just take existing mods' source code, feed this into their LLM of choice and generate a derivative work. They don't contribute back anything, because they don't even try to understand what they are doing.
Their ideas might be novel, but they don't contribute in any way to the common good in terms of capabilities or infrastructure. It's becoming nigh impossible to police this, and I fear the endgame is a sea of AI-generated slop which will inevitably implode once the truly innovative stuff dies and the people who actually do the work stop doing so.
AI agent coding has introduced to software development the same sort of interaction that brands brought to social media.
In which case, I kinda disagree. Substandard work is typically submitted by people who don't "get it" and thus either don't understand the standard for work or don't care about meeting it. Either way, any future submission is highly likely to fail the standard again and waste evaluation time.
Of course, there's typically a long tail of people who submit one work to a collection and don't even bother to stick around long enough to see how the community reacts to that work. But those people, almost definitionally, aren't going to complain about being "gatekept" when the work is rejected.
There is this real disconnect between what the visible level of effort implies you've done, and what you actually have to do.
It's going to be interesting to see how our filters get rewired for this visually-impressive-but-otherwise-slop abundance.
Last year though I purchased the next book in the series and I am 99% sure it was AI generated. None of the characters behaved consistently, there was a ton of random lewd scenes involving characters from books past. There were paragraphs and paragraphs of purple prose describing the scene but not actually saying anything. It was just so unlike every other book in the series. It was like someone just pasted all the previous books into an LLM and pushed the go button.
I was so shocked and disappointed that I had paid good money for some AI slop that I've stopped following the author entirely. It was a real eye opener for me. I used to enjoy just taking a chance on a new book because the fact that it made it through publishing at least implied some minimum quality standard, but now I'm really picky about what books I pick up because the quality floor is so much lower than in the past.
Honestly: there is SO much media, certainly for entertainment. I may just pretend nothing after 2022 exists.
Let’s be honest, this was always the case. The difference now is that nobody cares about the implementation, as all side projects are assumed to be vibecoded.
So when execution is becoming easier, it’s the ideas that matter more…
It used to be that getting to that point required a lot of effort. So, in producing something large, there were quality indicators, and you could calibrate your expectations based on this.
Nowadays, you can get the large thing done - meanwhile the internal codebase is a mess and held together with AI duct-tape.
In the past, this codebase wouldn't scale, the devs would quit, the project would stall, and most of the time the things written poorly would die off. Not every time, but most of the time -- or at least until someone wrote the thing better/faster/more efficiently.
How can you differentiate between 10 identical products, 9 of which were vibecoded and 1 of which wasn't? The one which wasn't might actually recover your backups when it fails. The other 9? Whoops, never tested that codepath. Customers won't know until the edge cases happen.
It's the app store effect, but magnified and applied to everything. Search for a product, find 200 near-identical apps, all somehow "official" -- 90% of which are scams or low-effort trash.
Wait, what? That's a great benefit?
https://www.youtube.com/watch?v=kLdaIxDM-_Y
> The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
— Tom Cargill, Bell Labs
Some day I’m going to get a crystal ball for statistics. Getting bored with a project was always a thing— after the first push, I don’t encounter like 80% of my coding side projects until I’m cleaning— but I’ll bet the abandonment rate for side projects has skyrocketed. I think a lot of what we’re seeing are projects that were easy enough to reach MVP before encountering the final 90% of coding time, which AI is a lot less useful for.
My experience is the opposite. It’s so much easier to have an LLM grind the last mile annoyances (e.g. installing and debugging compilation bullshit on a specific raspberry pi + unmaintained 3p library versions.)
I can focus on the parts I love, including writing them all by hand, and push the “this isn’t fun, I’d rather do something else” bits to a minion.
I don't think we need to wait a generation either. This probably was a part of their personality already, but a group of developers at my job seems to have just given up on thinking hard / thinking through difficult problems; it's insane to witness.
I’ve seen variations of this question since the first few weeks/months after the release of ChatGPT, and I haven’t seen an answer to it from leading figures in the AI coding space. What’s the general answer or point of view on this?
Long-term, this will do enormous damage to society and our species.
The solution is that you declare war and attack the enemy with a stream of slop training data ("poison"). You inject vast quantities of high-quality poison (inexpensive to generate but expensive to detect) into the intakes of the enemy engine.
LLMs are highly susceptible to poisoning attacks. This is their "Achilles' heel". See: https://www.anthropic.com/research/small-samples-poison
We create poisoned git repos on every hosting platform. Every day we feed two gigabytes of poison to web crawlers via dozens of proxy sites. Our goal is a terabyte per day by the end of this year. We fill the corners of social media with poison snippets.
There is strong, widespread support for this hostile posture toward AI. For example, see: https://www.reddit.com/r/hacking/comments/1r55wvg/poison_fou...
Join us. The war has begun.
Nice. I hope you are generating realistic commits and they truly cannot distinguish poison from food.
The cost of detecting/filtering the poison is many orders of magnitude higher than the cost of generating it.
Thing is, I worked manually on both of these a lot before I even touched Claude on them, so I was basically able to hit the wishlist items that I don't have time to deal with these days but had already figured out the logic for.
> author (pilot?) hasn't generally thought too much about the problem space
I’ve stopped saying that “AI is just a tool” to justify/defend its use precisely because of this loss of thought you highlight. I now believe the appropriate analogy is “AI is delegation”.
So talking to a vibe coder who's used AI is like talking to a high-level manager rather than to the engineer who wrote the code.
I have two projects right now on the threshold of "Show HN" that I used AI for but could have completed without AI. I'm never going to say "I did this with AI". For instance there is this HR monitor demo
https://gen5.info/demo/biofeedback/
which needs tuning up for mobile (so I can do an in-person demo for people who work on HRV), but most of all it needs to be able to run with pre-recorded data so that people who don't have a BTLE HR monitor can see how cool it is.
Another thing I am tuning up for "never saw anything like this" impact is a system of tokens that I give people when I go out as-a-foxographer
https://mastodon.social/@UP8/116086491667959840
I am used to marketing funnels having 5% effectiveness and it blows my mind that at least 75% of the tokens I give out get scanned and that is with the old conventional cards that have the same back side. The number + suit tokens are particularly good as a "self-working demo" because it is easy to talk about them, when somebody flags me down because they noticed my hood I can show them a few cards that are all different and let them choose one or say "Look, you got the 9 of Bees!"
It seems silly, but I know I'm more likely to review an implementation if I can learn more about the author's state of mind from their style.
As I may have noted before, humans are the problem.
I don't particularly care if people question that, but the source repo is on GitHub: they can see all the edits that were made along the way. Most LLMs wouldn't deliberately add a million spelling or grammar mistakes to fake a human being... yet.
As for knowing what I'm talking about: many of my blog posts are about stuff that I just learned, so I have many disclaimers that the reader should take everything with a grain of salt. :-) That said, I put a ridiculous amount of time into these things to make sure they're correct. Knowing that your stuff will be out there for others to criticize is a great motivator to do your homework.
Side note: I’d think installing Anubis over your work would go a long way to signaling that but ymmv.
Presumably, if this is true, it should be obvious from the quality of your product. If it isn't, then maybe you need to rethink the value of your artisanal hand-written code.
These days I do see a lot of people choosing software for the money. Notably, many of them are bootcamp graduates and arguably made a pivot later in life, as opposed to other careers (such as medicine) which get chosen early. Nothing wrong with that (for many it has a good ROI), but I don’t think this changed anything about people with technical hobbies.
When you’re young, you tend not to choose the path the rest of your life will take based on income. What your parents want for you is a different matter…
It's taken me about a month; currently at ~500 commits. I've been obsessed with this problem for ~6 weeks and have made an enormous amount of progress, but admittedly I'm not an expert in the domain.
Being intentionally vague, because I don't want to tip my hand until it's ready. The problem is related to an existing open source tool in a particular scientific niche which flatly does not work on an important modern platform. My project, an open source repo, brings this important legacy tool to this modern platform and also offers a highly engaging visual demo that is of general interest, even to a layperson not interested in programming or this particular scientific niche.
I genuinely believe I have something valuable to offer to this niche scientific community, but also as a general interest and curiosity to HN for the programming aspects (I put a lot of thought into the architecture) as well as the visual aspects (I put a lot of thought into the design and aesthetics).
Do you have any advice on how to present this work in a compelling way to people who understandably feel as burned out on AI slop as you do?
* some people want to show off a fun project/toy/product that they built because it's a business they're trying to start and they want to get marketing
* some people want to show off a fun project/toy/product that they built because it involves some cool tech under the hood and they want to talk shop
* some people want to show off a fun project/toy/product that they built because it's a fun thing and they just want some people to have fun
I'm not an anti-AI luddite, but for god's sake, talk about (i.e. submit) something else!
Having too many subs could get out of hand, but sometimes you end up with so much paperwork generated so fast that it needs its own whole dedicated drawer in your filing cabinet ;)
It's still early, and it's easy to underestimate the number of visitors who would absolutely love to have the main page even more covered in pure vibe than it has been recently.
I would like to hear opinions as to why the non-human touch is preferred, that could add something that not many are putting into words.
Hopefully it's not a case of the lights being on but nobody's home :(
You'll be inventing a lot of novel circular apparatus with a pivot and circumferential rubber absorbers for transportation, and it'll take people serious effort to convince you it's just a wheel.
I mean it's a real problem, but it's also a solved problem, and also not a problem that comes up a lot unless you're doing the sort of engineering where you're using a CAD tool already.
I don't doubt it's useful, and seems pretty well crafted what little I tried it, but it doesn't really invite much discussion.
Agreed. r/ProgrammingLanguages had to deal with this recently in the same way HN has to; people were submitting these obviously vibecoded languages there that barely did anything, just a deluge of "make me a language that does X", where it doesn't actually do X or embody any of the properties that were prompted.
One thing that was pointed out was "More often than not the author also doesn't engage with the community at all, instead they just share their project across a wide range of subreddits." I think HN is another destination for those kinds of AI slop projects -- I'm sure you could find every banned language posted on that forum posted here.
Their solution was to write a new rule and just ban them outright. Things have been going much better since.
https://www.reddit.com/r/ProgrammingLanguages/comments/1pf9j...
Concur. Perhaps a dedicated or alternative, itch.io-like area named "Slop HN: ..."
Raising the quality bar would likely cut down on quantity as a side effect, and that would be a nice solution. One idea that a user proposed is a review queue where experienced HN users would help new Show HN submitters craft their posts to be more interesting and fit HN's conventions more.
Also, require disclosure of the use of AI in repos, and especially (or perhaps specifically discourage its use) when responding with comments to HN feedback.
I'll take this opportunity to strongly encourage sharing prompts (the newest tier of software source code) as the logical progression of OSS adding additional value to Show HN.
And yes, disclosing the use of AI should be par for the course.
So while I understand that new features on HN are few and far between, a quick validation of "Show HN" posts that says, "I see you are trying to post a Show HN..." with some concise explanation of the guidelines might help. I want to believe that most new users mean well, they just need better explanations.
From their perspective, HN is another place to post and get views on their project, part of a check list for their "launch" or whatever, not everything comes from within the ecosystem.
Some post their projects and then never reply to any of the comments, while for me (and many others, I bet) half the reason for posting a Show HN is that I'm looking to participate in discussions about my thing and to understand different perspectives on it.
> I want to believe that most new users mean well, they just need better explanations.
Yeah, so far the only thing I know of is the "Please read the Show HN rules and tips before posting" blurb on the /show list, and the separate pages. Maybe some interstitial or similar if the title prefix-matches with "Show HN" could display the rules, guidelines and "netiquette" more prominently and get more people to be aware of it.
For example, in one project, PRs have to be submitted to the "next" branch and not the default branch. This is written in the CONTRIBUTING.md file, which is linked in the PR template, with the mention that PRs that don't respect this will be closed. Most if not all submitters of low-quality PRs don't do anything once their initial PR is closed.
Pretty bummed about that, as I just submitted a Show HN I'm pretty happy about (it solves an annoying problem I had for years, which I know many people have) and I was looking forward to talking about it (https://news.ycombinator.com/item?id=47050872)
Most people did not read the post, which was immediately evident from how they posted their application by copy-pasting and editing an application posted by someone else before them.
Few things in life are as reliable and trustworthy as the laziness of others.
This wouldn't necessarily need to be a technical change, just an intuitive agreement that posts containing LLM or vibe coded content are not allowed to lie by using the tag, or will be flagged... Then again it could also be used to elevate their rank above other show HN content to give us humanoids some edge if deemed necessary, or a segregated [NOAI] page.
[edit]
The label might need more thought, although "NOAI" is short and intelligible, it might be seen as a bit ironic to have to add a tag containing "AI" into your title. [HUMAN]?
Feels like effort needs to be the barrier (which unfortunately needs human review), not "AI or not". In lieu of that, 100 karma or account minimum age to post something as Show HN might be a dumb way to do it (to give you enough time to have read other people's so you understand the vibe).
Also, it's not uncommon for weekend projects to be done in a short span with just a "first commit" message dump, even pre-AI.
So either we completely avoid automation and create a community council to decide what deserves to be shown to the rest of the community, or we just let the best AI models decide if a project is worthy of showing up on the front page?
Or we can do all of the above :)
I suspect automating a "code base over time" metric is tricky. Not everyone will be using git or a VCS, and some things don't need a codebase to be shared.
Once some users have extra power to push content to the front-page, it will be abused. There will be attempts to gain that privilege in order to monetize, profit from or abuse it in some other way.
The only option along this path would probably be to keep the list of such users very tightly controlled and each vouched for individually.
Another approach might be to ask random users (above a certain karma threshold) to rank new submissions. Once in a while, stick a Show HN post into their front page with up and down arrows, and mark it as a community service. Given HN volume it should be easy to get an average opinion in a matter of minutes. https://news.ycombinator.com/item?id=42392302
Meaning you would have to demonstrate that you had or were willing to contribute to the HN community before just promoting your own stuff.
So, in the past, I've created throwaway HN accounts for sharing things that connect to my real ID.
On the front page, someone made a cool isometric NYC map via vibe coding - another front pager was someone who also claimed to make an ultra fast PDF parser that failed on very common PDFs and gamed the speed metric by (useless) out of order parsing.
Guess which one I installed and spent more time using? These vibe coded projects aren't interesting for their code and not intended to be used by anyone if they're libraries, but the applications made with vibe coding are often very cool.
An easy win is turning off the firehose of vibe-coded GitHub portfolio projects and just asking for a link to a hosted application. Easy.
e.g. [20h/2d/$10] could indicate "I spent 20 human-hours over 2 days and burned $10 worth of tokens" (it's hard to put a single-dimensional number on LLM usage and not everyone keeps track, but dollars seem like a reasonable approximation)
I wonder how this review system would work. Perhaps a Show HN is hidden by default and visible only to experienced HN users, who provide enough positive reviews for it to become visible to everyone else. Although this does sound like gatekeeping to me, and it may starve many deserving Show HNs before they get enough attention.
- Min. 90 days account existence in order to submit
- Cap on plain/Show/Ask HN posts per week
Most of the spam I see in /new or /ask is from fresh accounts. This approach is simple and awards long-term engagement/users while discouraging fly-by-night spammers.
I've loved some of the vibe coded apps that are hosted somewhere and have made the front page, but a lot of the links are to GitHub projects intended to farm stars for throwaway portfolio padding (and those often don't work).
Egh. No silver bullet here.
Set a policy of X comments required per submission in the last 30 days (not counting last 24 hours) for all submissions, not just "Show HN:" posts.
Meaning, users would need to post X comments before they could post a submission and by not counting the last 24 hours, someone couldn't join, post X comments and immediately post a submission.
It would limit new submission posts to people who are active in the community so they would be more familiar with the policies and etiquette of HN along with gaining an idea of what interests its members.
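(A sketch of how that eligibility rule might be checked; the 30-day window and 24-hour exclusion come from the proposal above, while the function name and the threshold of 5 comments are illustrative.)

```python
from datetime import datetime, timedelta

def can_submit(comment_times, now, required=5):
    """True if the user has `required` comments in the last 30 days,
    excluding the most recent 24 hours -- so you can't join, dash off
    a burst of comments, and immediately post a submission."""
    window_start = now - timedelta(days=30)
    window_end = now - timedelta(hours=24)
    eligible = [t for t in comment_times if window_start <= t <= window_end]
    return len(eligible) >= required

now = datetime(2026, 3, 1)
burst = [now - timedelta(hours=h) for h in range(1, 6)]   # 5 comments, all in last 24h
steady = [now - timedelta(days=d) for d in range(2, 7)]   # 5 comments over the past week

print(can_submit(burst, now))   # False: the recent burst doesn't count
print(can_submit(steady, now))  # True: 5 comments aged past 24 hours
```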
One thing I noticed recently while going through several of the Show HN submissions was that a lot of the accounts had been created the same day the submission was made.
My guess is HN has become featured on a large number of "Where do I promote/submit my _____?" lists in blogs, social media, etc. to the point that HN is treated like a public bulletin board more than a place to share things with each other in the community.
I love the Show HN section because so many interesting things get posted there but even I have cut back on checking it lately because there are simply too many things posted to check out.
I hope they do something to improve it.
The clarity and focus this discipline would enforce could have a pleasant side effect of enabling a kind of natural evolution of categorizations, and alternative discovery UIs.
HN has a vouch system. Make a Show HN pool, allow accounts over some karma/age level to vouch them out to the main site. I recently had a naive colleague submit a Show HN a week or so ago that Tom killed... for good reason. I told the guy to ask me for advice before submitting a FOSS project he released and instead he shit out a long LLM comment nobody wants to read.
The HN guidelines IMO need a (long overdue) update to describe where a Show HN submission needs to go and address LLM comments/submissions. I get that YC probably wants to let some of it be a playground since money is sloshing around for it, but enough is enough.
hah that sounds like a Show HN incubator.
More seriously though, I think some sort of curation is unavoidable with such topics. If you get inspired by stack overflow where you have some similar mechanics at work, then I'd say that is not too bad. But of course you risk some people being angry about why their amazing vibe coded app is not being shown. Although the more I think of it, this might be a good thing.
Edit: One more thought just came to my mind. A slight modification to the curation rule: you let everything through, just like now. However, the posts are reviewed, and those with enough positive review votes get marked in some shape or form, which allows them to be filtered and/or promoted on the show page.
It was not just a product launch for me. I was, sort-of in a crisis. I had just turned 40 and had dark thoughts about not being young, creative and energetic anymore. The outlook of competing with 20 year old sloptimists in the job market made me really anxious.
Upon seeing people enjoying my little game, even if it's just a few HNers, I found an "I still got it" feeling that pushed me to release on Steam, to good reviews.
It was never about the money, it was about recovering my self-confidence. Thank you HN, I will return the favour and be the guy checking the new products you launch. If Show HN is drowning, I will drown with it.
https://news.ycombinator.com/item?id=46137953
Thank you for making it, and don't give up. Passion and vision > vibe coding sloptimists.
I'm sure a happy medium is shutting off links to vibe-coded source code and only allowing vibed hosted applications or websites. For those of us who want to read code, source code that means nothing to anyone is pretty disappointing for a Show HN.
As per the old efficient market jokes: https://news.ycombinator.com/item?id=28029044
One of those comments was genuinely useful feedback from Argentina about localization. That alone made it worth posting. But the post was gone from page 1 in what felt like minutes.
What's interesting is this isn't a weekend vibe-coded project - it involves actual physical production, printing, and shipping. But from the outside it probably looks like "another AI wrapper," which I think is the core problem: the flood of low-effort AI projects has made people reflexively skeptical of anything that mentions generation, even when there's real infrastructure behind it.
- Children's books, at least the well-reviewed ones, are pretty good
- This is AI generated, so I expect the quality to be significantly lower than a children's book. Flipping through the examples, I am not convinced that this will be higher quality than a children's book.
- At 20 euros for a paperback, this is also more expensive than most children's books
- Your value prop, as I take it, is that your product is better because it is a book generated for just one child, but I am not convinced that's a solid value prop. I mean, it is kind of an interesting gimmick, but the book being fully AI generated is a large negative, and the book being uniquely created for my kid is a relatively smaller positive.
Those are definitely the highest-order bits you need to prove to me in order to get traction. A couple of smaller things you should fix as well:
- As an English speaker, almost all the examples are not in English. You should take a reasonable guess at my language and then show me examples in my language
- It's difficult to get started: "Create your own book" leads to a signup page and I don't want to go through that friction when I am already skeptical
You're right that children's books can be excellent, and for generic topics a well-reviewed book from a skilled author and illustrator will beat what we generate. No argument there.
Where we see real value is in the gaps the publishing industry doesn't serve. Bilingual families who can't find books in Maltese/English or Estonian/German. A child with an insulin pump who wants to see a superhero like them. A kid processing their parents' divorce. A child with two dads, or being adopted, or starting at a new school in a country where they don't speak the language yet. No publisher will print a run of one for these families - but these are exactly the stories that matter most to them.
On the UX points - you're right on both. We should localize the showcase to your language, and the signup wall before trying is too much friction. Working on both.
Give people the ability to submit a “Show HN” one year in advance. Specifically, the user specifies the title and a short summary, then has to wait at least a year until they can write the remaining description and submit the post. The user can wait more than a year or not submit at all; the delay (and specifying the title/summary beforehand) is so that only projects that have been worked on for over a year are submittable.
Alternatively, this can be a special category of “Show HN” instead of replacing the main thing.
It's like books. Old but still relevant books are the best books to read.
This tech industry is changing so fast though. Maybe a year is too much?
I am wary of blogs by celebrity software managers such as DHH, Jeff Atwood, Joel Spolsky, and Paul Graham because they talk as if there was something special about their experience in software development and marketing, except... there isn't.
The same is true for the slop posts about "How I vibe coded X", "How I deal with my anxiety about Y" and "Should I develop my own agentic workflow to do Z?" These aren't really interesting because there isn't anything I can take away from them. Doomscrolling X you might do better, because little aphorisms like "once your agent starts going in circles and you find yourself arguing with it, you should start a new conversation" are much more valuable than "evaluations" of agents where the author didn't run enough prompts to keep statistics, or a log of a very path-dependent experience they had. At least those celebrity managers developed a product that worked and managed to sell it; the average vibe coder thinks it is sufficient that it almost worked.
Before, projects were more often carefully human crafted.
But nowadays we expect such projects to be "vibe coded" in a day. And so, we don't have the motivation to invest mental energy in something that we expect to be crap underneath and probably a nice show off without future.
Even if the result is not the best in the world, I think that what interests us is seeing the effort.
> The post quickly disappeared from Show HN's first page, amongst the rest of the vibecoded pulp.
The linked article[0] also talks at length about the impact of AI and vibe-coding on indie craftsmanship's longevity.
[0] - https://johan.hal.se/wrote/2026/02/03/the-sideprocalypse/
I took a look at the project and it was a 100k+ LoC vibe-coded repository. The project itself looked good, but it seemed quite excessive in terms of what it was solving. It made me think, I wonder if this exists because it is explicitly needed, or simply because it is so easy for it to exist?
When I launched a side project a couple years ago, getting to the front page felt like a real achievement requiring weeks of iteration and genuine problem-solving. Now you can vibe-code something in a weekend and post it. The median Show HN quality has dropped, so people naturally vote less aggressively on the category as a whole.
The 37% stuck at 1 point stat is the real story. The solution is not changing HN mechanics. It is people being more selective about what they post - and the community being more willing to say "this is not ready" in the comments rather than just silently scrolling past.
It's fair to give the audience a choice to learn about an AI-created product or not.
If I used LLMs to generate a few functions, would I be eligible for it? What constitutes "built this with no/minimal AI"?
Maybe we should have a separate section for 80%+ vibe coded / agent developed.
As dang posted above, I think it's better to frame the problem as "influx of low quality posts" rather than framing policies having to do explicitly with AI. I'm not sure I even know what "AI" is anymore.
So in future everything’s gonna be “agentic”, (un)fortunately.
Every time I write about it, I feel like a doomsayer.
Anthropic admits that LLM use makes the brain lazy.
Just as we stopped remembering phone numbers once Google and mobile phones arrived, the same will probably happen with coding/programming.
One is where the human has a complete mental map of the product, and even if they use some code generating tools, they fully take responsibility for the related matters.
And there is another, emerging category, where developers don't have a full mental map because the code was created by an LLM, and no one actually understands what works and what doesn't.
I believe these are two categories that are currently merged into one Show HN, and while in the first category I can be curious about the decisions people made and the solutions they chose, I don't give a flying fork about what an LLM generated.
If you have a 'fog of war' in your codebase, well, you don't own your software, and there's no need to show it as yours. By the same token, if you used autocomplete, or a typewriter in the age of handwriting, and the thinking is yours, an LLM shouldn't be a problem either.
I work with a large number of programmers who don't use AI and don't have an accurate mental map for the codebases they work in...
I don't think AI will make these folks more destructive. If anything, it will improve their contributions because AI will be better at understanding the codebase than them.
Good programmers will use AI like a tool. Bad programmers will use AI in lieu of understanding what's going on. It's a win in both cases.
Are the tokens to write out design documentation and lots of comments too expensive or something? I’m trying to figure out how an LLM will even understand what they wrote when they come back to it, let alone a human.
You have to reify mental maps if you have LLM do significant amounts of coding, there really isn’t any other option here.
"Oh, this library just released a new major version? What a pity, I used to know v n deeply, but v n+1 has this nifty feature that I like"
It happened all the time even as a solo dev. In teams, it's the rule, not the exception.
Vibing is just a different obfuscation here.
When you upgrade a library, you made that decision — you know why, you know what it does for you, and you can evaluate the trade-offs before proceeding (unless you're a react developer).
That's not a fog of war, that's delegation.
When an LLM generates your core logic and you can't explain why it works, that's a fundamentally different situation. You're not delegating — you're outsourcing the understanding, and that makes the result not yours.
The benefit of libraries is it's an abstraction and compartmentalization layer. You don't have to use REST calls to talk to AWS, you can use boto and move s3 files around in your code without cluttering it up.
Yeah, sometimes the abstraction breaks or fails, but generally that's rare unless the library really sucks, or you get a leftpad situation.
Having a mental map of your code doesn't mean you know everything, it means you understand how your code works, what it is responsible for, and how it interacts with or delegates to other things.
Part of being a good software engineer is managing complexity like that.
Case in point: aside from Tabbing furiously, I use the Ask feature to ask vague questions that would take my coworkers time they don't have.
Interestingly at least in Cursor, Intellisense seems to be dumbed down in favour of AI, so when I look at a commit, it typically has double digit percentage of "AI co-authorship", even though most of the time it's the result of using Tab and Intellisense would have given the same suggestion anyway.
This really bothers me, coming here asking for human feedback (basically: strangers spending time on their behalf) then dumping it into the slop generator pretending it is even slightly appreciated. It wouldn't even be that much more work to prompt the LLM to hide its tone (https://news.ycombinator.com/item?id=46393992#46396486) but even that is too much.
How many non-native English speakers are on HN? If it's more than 30%, why should they have to use a whole new language if they can just let an LLM do it in a natural-sounding way?
Post both versions
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Some of it is "I wish things I think are cool got more upvotes". Fair enough, I've seen plenty of things I've found cool not get much attention. That's just the nature of the internet.
The other point is show and share HN stories growing in volume, which makes sense since it's now considerably easier to build things. I don't think that's a bad thing really, although curation makes it more difficult. Now that pure agentic coding has finally arrived IMO, creativity and what to build are significantly more important. They always were but technical ability was often rewarded much more heavily. I guess that sucks for technical people.
HN has a very different personality at weekends versus weekdays. I tend to find most of the stuff I think is cool or interesting gets attention at the weekends, and you'll see slightly more off the wall content and ideas being discussed, whereas the weekdays are notably more "serious business" in tone. Both, I think, have value.
So I wonder if there's maybe a strong element of picking your moment with Show HN posts in order to gain better visibility through the masses of other submissions.
Or maybe - but I think this goes against the culture a bit - Show HN could be its own category at the top. Or we could have particular days of the week/month where, perhaps by convention rather than enforcement, Show HN posts get more attention.
I'm not sure how workable these thoughts are but it's perhaps worth considering ways that Show HN could get a bit more of the spotlight without turning it into something that's endlessly gamed by purveyors of AI slop and other bottom-feeding content.
Chasing clout through these forums is ill advised. I think people should post, sure. But don't read into the response too much. People don't really care. From my experience, even if you get an insanely good response, it's short lived, people think its cool. For me it never resulted in any conversions or continued use. It's cheap to upvote. I found the only way to build interest in your product is organic, 1 on 1 communication, real engagement in user forums, etc.
The difference now is that there is even less correlation between "good readme" and "thoughtful project".
I think that if your goal is to signal credentials/effort in 2026 (which is not everyone's goal), a better approach is to write about your motivations and process rather than the artefact itself - tell a story.
I've launched multiple side projects through Show HN over the years. The ones that got traction weren't better products. They hit the front page during a slow news hour and got enough early upvotes to survive the ranking curve. The ones that flopped were arguably more interesting but landed during a busy cycle. That's not a Show HN problem, that's a single-ranking-pool problem.
What would actually help is a separate ranking pool for Show HN with slower time decay. Let projects sit visible for longer so the community can actually try them before they drop off. pg's original vision was about making things people want. Hard to evaluate that in a 90-minute window.
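For intuition, the widely cited HN-style ranking divides an item's points by a power of its age; a separate Show HN pool could simply use a smaller decay exponent so projects stay visible longer. The sketch below is purely illustrative, not HN's actual code, and the `SHOW_GRAVITY` constant is a made-up value for the sake of the example:

```python
# Illustrative HN-style gravity ranking. The 1.8 exponent is the figure
# commonly attributed to HN's front page; everything else is hypothetical.

def rank(points: int, age_hours: float, gravity: float) -> float:
    # Votes count for less as the item ages; a larger gravity
    # exponent means faster decay down the page.
    return (points - 1) / (age_hours + 2) ** gravity

MAIN_GRAVITY = 1.8  # commonly cited default for the main pool
SHOW_GRAVITY = 1.2  # hypothetical: slower decay for a Show HN pool

# After ten hours, the same 10-point project ranks noticeably higher
# under the slower Show HN decay than under the default gravity.
fast = rank(10, 10, MAIN_GRAVITY)
slow = rank(10, 10, SHOW_GRAVITY)
```

The point of the smaller exponent is exactly the "longer evaluation window": the project's score erodes gently enough that people have time to actually try it before it falls off.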
C'est la vie and que sera. I'm sure the artistic industry is feeling the same. Self expression is the computation of input stimuli, emotional or technical, and transforming that into some output. If an infallible AI can replace all human action, would we still theoretically exist if we're no longer observing our own unique universes?
Maybe if people did Show HN for projects that are useful for something? Or at least fun?
There's a disease on HN related with the latest fad:
- (now) "AI" projects
- (now) X but done with "AI"
- (now) X but vibecoded
- (less now, a lot more in the recent past) X but done in Rust
- (none now, quite a few in a more distant past) X but done with blockchain
If the main quality of the project is one of the above, why would it attract interest?
The thing in show HN has to do something to raise interest. If not even the author/marketer thinks it does something, why would anyone look at it?
Trane (good post): https://news.ycombinator.com/item?id=31980069
Pictures Are For Babies (lame post): https://news.ycombinator.com/item?id=45290805
It's only that you can't claim any of the top shelf prizes by vibe coding
I see no reason to disrespect your work from what you say, but I also see no reason that AI would be much help to you after you had been learning for a year. If you are in the loop, shouldn't this be just about the moment when your growing abilities start to easily outpace the model's fixed abilities?
If you do, then it's not vibe coded.
For me, I have different levels of vibes:
Some testing/prototyping bash scripts 100% vibe coded. I have never actually read the code.
Sometimes, for early iterations, I am familiar with the general architecture but do not know the exact file contents.
Sometimes I have gone through and practically rewritten a component from scratch, either because it was too convoluted or because it did not have the perfect abstraction I wanted, etc.
For me the third category is not vibe coded. The first 2 are tech debt in the making.
Good question, and by the same "token" when does it start?
Maybe if there's no possible way the creator could have written it by hand, perhaps due to almost complete illiteracy to code in any language, or something like that, it would be a reference point for "pure vibe". If the project is impressive, that's still nothing to be ashamed of. Especially if people can see the source code.
All kinds of creative people I see are mostly no dummies and it might be better than nothing for them to honestly rate their own submissions somewhere on the scale from pure vibe to pure manual?
With no stigma regardless, and let the upvotes or downvotes from there give an indication of how accurate the self-assessments are. Voting directly to Show HN could even have a different "currency" [0] to help regulate the fall of Show submissions, where a single upvote could mean something like infinitely more than zero.
I'm not disappointed by a project purely vibed by somebody like a visual artist, storyteller, or business enthusiast who has never written a line of code, as long as it is astoundingly impressive, in the league of the better projects, those I would like to take a look at.
I also see real accomplished coders guide their agents to arrive at things that wouldn't be as nice if they didn't have years of advanced manual ability beforehand.
Plus I think I'm in the vast majority and have no interest in "slop", in a way that aligns with so many kinds of people who are also turned off.
But so far, the best definition we have for slop is "we know it when we see it".
Oh, well that's all I've got, so far :)
[0] slop vs non-slop which is like pass/fail, or even a numerical rating could be on the "ballot".
https://news.ycombinator.com/item?id=47006108
https://news.ycombinator.com/item?id=47026263
I attribute it mostly to my own inability to pitch something that is aimed for many audiences at once and needs more UX polishing and maybe a bit on timing.
It's tough when you're not looking to sell a product but rather to engage with a community without going the Twitter/Bluesky route (which I may begrudgingly start using).
Maybe evals is a problem that people don't have yet because they can just build their custom thing or maybe it needs a "hey, you're building agent skills, here's the mental model" (e.g. https://alexhans.github.io/posts/series/evals/building-agent... ) and once they get to the evals part, we start to interact.
In any case, I still find quite a lot of cool things in SHOW HN but the volume will definitely be a challenge going forward.
These days I guess we don't want a library? I can create an MIT-licensed repo with some charts you can point your AI agent to, if it helps?
The font is Gaegu.
The legend says SHNs are getting worse, but surely if the % of SHN posts with 1 point is going DOWN (as per graph) then it's getting better? Either I am dense or the legends are the wrong way round no?
Show HN: My Project - A description for my vibe coded project [3 weeks]
A lot of the good stuff I see on Show HN are projects that have been worked on for a long time. While I understand that vibe coding is a newer trend, I also know that vibe coded projects are less likely to stand the test of time. With this, we don't have to worry about whether a project is AI assisted or not, nor do we ban it. Instead, we just incentivize longer-term projects. If the developer lies about how long they worked on the project, they will get reported and downvoted into oblivion.
I did 3 ShowHN in 2024 (outside of the scope of this analysis), one with 306 points, another with 126 points and the third with... 2. There's always been some kind of unpredictability in ShowHN.
But I think the number one criteria for visibility is intelligibility: the project has to be easy to understand immediately, and if possible, easy to install/verify. IMHO, none of the three projects that the author complains didn't get through the noise qualify on this criteria. #2 and #3 are super elaborate (and overly specific); #1 is the easiest to understand (Neohabit) but the home page is heavy in examples that go in all directions, and the github has a million graphics that seem quite complex.
Simplify and thou shalt be heard.
I'm wondering how much of it is portfolio building to keep or find a new job in a post-AI coding world.
Show HN: Clawntown – An Evolving Crustacean Island - https://news.ycombinator.com/item?id=47023255
Something rapid fire, fun, categorized maybe. Just a showcase to show off what you've done.
And the comments should start with the day/month the project was first launched.
Where the vibe coders with their slop cannons aren't present, though, is in things that require hard-won domain knowledge, i.e., stuff that requires you to actually create a new idea from an understanding of actual areas of need.
And that kind of thing probably isn't going to do well on Show HN, because your audience probably isn't on HN.
I was a skeptic last year, and now... not so much. I am having Claude build me a distributed system from scratch. I designed it last week as I was admitting to myself the huge failure of my big "I love to code" project that I failed to get traction on.
It took me a week to even give the design to Claude because I was afraid of what it meant. I started it last night, and my jaw dropped. There is a new skill being grown right now, and it... is something.
It certainly isn't nothing, and I for one am curious to simply see what people are making with vibes alone. It's fascinating... and horrifying.
But, I have learned to silence that part of me that is horrified since the world never cared for what I find beautiful (i.e. terrible languages like JavaScript)...
Vibe coding is not helping either, I guess. Now it is even cheaper to create assets for the distribution channel.
I think the same thing happened with Product Hunt.
You could argue it's dead in the sense of "dead internet theory". Yes, more projects than ever are being submitted, but they were not created by humans. Maybe they are being submitted by humans, for now.
I think that Show HN should be used sparingly. It feels like collective community abuse of it will lead to people filtering them out mentally, if not deliberately. They're very low signal these days.
Not everything has to revolve around HN.
Even before AI got so strong, some of the translations were fairly abnormal in their own way.
>The post body is supposed to be part of the human connection element!
I really think this is the best too :)
Maybe for the non-English speakers, or anyone really, if a project means a lot, have a number of people who are smart in different ways look over the text a number of times and help you edit beforehand.
To make sure it's what you the human want to really say at the time.
That would be the pg way.
It is a comeback from a post that stayed for a few hours on the front page a few years ago. Also, it is a useful, non-AI-slop, free product. So when it got no upvotes, it made me realize I don't understand the HN community the way I used to think I did.
Here is the post for the curious
Show HN: (the return of) Read The Count of Monte Cristo and others in your email
https://news.ycombinator.com/item?id=46854574
I linked one of my projects in a post and it got some really good responses. I did a bit more work on it and posted a Show HN thinking a few people might be interested but it got 0 traction.
I even made it a point to go on the new Show HN and checkout some peoples projects (how can I expect anyone to check mine out if I'm not doing the same) and it is hard to keep up.
I have another app that I've been working on for the past 3 months, and while I want to do a Show HN to discuss how I built it and the moments I spent banging my head against the wall working on bugs, I sadly wonder if there's any point.
The market is saturated with superficial solutions that look amazing at a glance but don't work at all in the medium or long term yet it doesn't matter at all; they don't even have an incentive to improve, ever, because the founder cashes out/exits before they need to worry about the stuff under the hood. Customer support is replaced by AI agents so nobody can feel the customer's pain anymore. Then investors find ways to financialize the product so that it doesn't depend on consumers anymore and can just tap into big contracts from big institutions... And yet they still spend big on ads, just to prevent new entrants from entering the market.
It reminds me of my time in crypto; the coins were sold as one thing but all the big well known projects barely had less than half of the features implemented (compared to what was advertised)... And 10 years in, most of those projects cashed out big time and still don't have the features promised. Many shut down completely. Doesn't matter. The whole thing existed and succeeded as a pure shell project.
Horrible industry. Do not participate.
Yet most of the time, if I spend five minutes a day on Show HN, I'll find something new that I find interesting. I wouldn't say that Show HN is drowning, but creativity does seem to be on life support. I'm sure that's partly a generative AI problem, though LLMs are pretty good rubber ducks, so I'm surprised by how acute the issue has gotten so quickly.
If this effect is noticeable on an obscure tech forum, one can only imagine the effect on popular source code forges, the internet at large, and ultimately on people. Who/what is using all this new software? What are the motivations of their authors? Is a human even involved in the creation anymore? The ramifications of all this are mind-boggling.
It feels like the age of creating some cool new software on your own to solve a problem you had, sharing it and finding other people who had the same problem, and eventually building a small community around it is coming to a close. The death of open source, basically.
Having said that, it used to feel like being part of an exclusive club to have the skills and motivation to put a finished project on HN. For me, posting a Show HN was a huge deal, usually done after years of development. Remember when developing something worthwhile took years and it was written entirely by hand?
I don't mind much though - I love that programming is being democratized and no longer only for the arcane wizards of the back room.
Programming has long been democratized. It’s been decades now where you could learn to program without spending a dollar on a university degree or even a bootcamp.
Programming knowledge has been freely available for a long time to those who wanted to learn.
In the future it will seem very strange that there was a time when people had to write every line of code manually. It will simply be accepted that the computers write computer programs for you, no one will think twice about it.
Somewhere right now there's a complete greenhorn vibecoder who's saying "hold my beer" ;)
While they proceed to learn everything they can about the code that the LLM generated for them.
For the next few years, and never come back to drink the rest of the beer :)
The obvious counterpoint is that AO3 is brilliant, which it is: give people a way to ontologize themselves and the result is amazing. Sure, AO3 has some sort of make-integer-go-up system, but it reveals the critical defect in “Show HN”: one pool for all submissions means the few that would before have been pulled out by us lifeguards are more likely to drown, unnoticed, amidst the throngs. HN’s submissions model only scales so far without AO3’s del.icio.us-inherited tagging model. Without it, tool-assisted creative output will increasingly overwhelm the few people willing to slog through an untagged Show HN pool. Certainly I’m one of them; at 20% by weight AI submissions per 12 hours in the new feed alone, heavily weighted in favor of show posts, my own eyes and this post’s graphs confirm that I am right to have stopped reading Show HN. I only have so much time in my day, sorry.
My interest in an HN post, whether in new or show or the front page, is directly proportional to how much effort the submitter invested in it. "Clippy, write me a program" is no more interesting than a standard HN generic rabble-rousing link to a GitHub issue, or a fifty-page essay about some economics point that could have been concisely conveyed in one. If the submitter has invested zero personal effort into whatever degree of designcraft, wordcraft, and codecraft their submission contains, then they have nothing to Show HN.
In the rare cases when I interact with a show post these days, I’ve found the submissions to be functionally equivalent to an AI prompt: “here’s my idea, here’s my solution, here’s my app” but lacking any of the passion that drives people to overcome obstacles at all. That’s an intended outcome of democratization, and it’s also why craft fairs and Saturday markets exercise editorial judgment over who gets a booth or not. It’s a bad look for the market to be filled with sellers who have a list of AI-generated memes and a button press, whose eyes only shine when you take out your wallet. Sure, some of the buttons might be cool, but that market sucks to visit.
Thus, the decline of Show HN. Not because of the democratization of knowledge, but because lowering the minimum effort threshold to create and post something to HN reveals a flaw-at-scale of the community-voting editorial model: it only works when the editorial community scales as rapidly as submissions, which it obviously has not.
Full-text search tried to deprecate centralized editorial effort in favor of language modeling, and turned out to be a disastrous failure after a couple decades due to the inability of a computer to distinguish mediocre (or worse) from competent (or better). HN tried to deprecate centralized editorial effort and it has survived well enough for quite some time, but gestures at Show HN trends graphs it isn’t looking good either. Ironically, Reddit tried to implement centralized moderation on a per-community basis — and that worked extremely well for many years, until Reddit rediscovered why corporations of the 90s worked so hard to deprecate editorial staff, when their editors engaged in collective action against management (something any academic journal publisher is intimately familiar with!).
In that light, HN's core principle is democratizing editorial review — but now that our high-skill niche is no longer high-skill, the submissions are flooding in and the reviewers are not. Without violating the site's core precepts of submission equality and editorial democracy, I see no way that HN can reverse the trend shown by OP's data. The AO3 tagging model isn't acceptable, as it creates unequal distinctions between submissions and a site complexity that clashes with long-standing operator hostility towards ontologies. The Reddit and academic journal editorial models aren't acceptable, as they create unequal distinctions between users and editors that clash with long-standing operator hostility towards exercising editorial authority over the importance of submissions. And HN can't even limit Show HN submissions to long-standing or often-participating users, because that would prevent exactly the discoveries of gems in the rough that show used to be known for.
The best idea I’ve got is, like, “to post to Show HN, you must make several thoughtful comments on other Show HN posts”, which puts the burden of editorial review into the mod team’s existing bailiwick and training, but requires some extra backend code that adds anti-spam logic, for example “some of your comments must have been upvoted by users who have no preexisting interactions with your comments and continued participating on the site elsewhere after they upvoted you” to exclude the obvious attack vectors.
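That gating rule could be sketched roughly like this. Every name, field, and threshold here is hypothetical, invented only to make the proposal concrete; nothing like this exists in HN's codebase:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    voter_knows_author: bool   # preexisting interactions with the author's comments
    voter_stayed_active: bool  # kept participating on the site after upvoting

@dataclass
class ShowComment:
    votes: list  # upvotes this comment received on a Show HN thread

def can_post_show_hn(show_comments, min_comments=3, min_independent_votes=2):
    """Sketch of the proposed gate: the user needs several comments on
    other Show HN posts, some of which were upvoted by 'independent'
    voters (no prior interactions, still active afterwards)."""
    if len(show_comments) < min_comments:
        return False
    independent = sum(
        1
        for c in show_comments
        for v in c.votes
        if not v.voter_knows_author and v.voter_stayed_active
    )
    return independent >= min_independent_votes
```

The independence check on the voters is the anti-spam piece: a ring of sock puppets upvoting each other fails both conditions, while a genuine lurker who upvotes you and keeps browsing passes. The thresholds would obviously need tuning against real traffic.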
I wouldn’t want to be in their shoes. A visionary founder left them a site whose continuing health turns out to hinge upon creating things being difficult, and then they got steamrolled by their own industry’s advancements. Phew. Good luck, HN.
This is still possible. Vibe coders are just not interested in working on a piece of software for years till it's polished. It's a self selection pattern. Like the vast amount of terrible VB6 apps when it came out. Or the state of JS until very recently.
Just saw one go from first commit to HN in 25m