vovavili38 minutes ago
Replacing an 11.6GB Parquet file every 5 minutes strikes me as a bit wasteful. I would probably use Apache Iceberg here.
ai-inquisitor28 minutes ago
It's not doing that. If you look at the repository, it's adding a new commit with tiny Parquet files every 5 minutes. This recent one was only a 20.9 KB Parquet file: https://huggingface.co/datasets/open-index/hacker-news/commi... and the ones before it were a median of 5 KB: https://huggingface.co/datasets/open-index/hacker-news/tree/...

The bigger concern is how large the git history is going to get on the repository.
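
For the curious, a quick sketch (assuming huggingface_hub's list_repo_tree API) to eyeball how small the live files in today/ actually are:

  from huggingface_hub import HfApi

  api = HfApi()
  for entry in api.list_repo_tree("open-index/hacker-news",
                                  path_in_repo="today",
                                  repo_type="dataset"):
      # RepoFile entries carry a .size in bytes; folders don't
      print(entry.path, getattr(entry, "size", "-"))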

btown6 minutes ago
I recall that this became a big problem for the Homebrew project in terms of load on the repo, to the extent that GitHub asked them not to recommend/default-enable shallow clones for their users: https://github.com/Homebrew/brew/issues/15497#issuecomment-1...

This is likely to be lower traffic, and the history should (?) scale only linearly with new data, so likely not the worst thing. But it's something to be cognizant of when using SCM software in unexpected ways!

vovavili19 minutes ago
This makes more sense. I still wonder if the author isn't just effectively recreating Apache Iceberg manually here.
tomrod18 minutes ago
Are they paying for the repo space, I wonder?
zerocrates27 minutes ago
"The dataset is organized as one Parquet file per calendar month, plus 5-minute live files for today's activity. Every 5 minutes, new items are fetched from the source and committed directly as a single Parquet block. At midnight UTC, the entire current month is refetched from the source as a single authoritative Parquet file, and today's individual 5-minute blocks are removed from the today/ directory."

So it's not really one big file getting replaced all the time. Though a less extreme variation of that is happening day to day.
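
In other words, reading everything is a matter of globbing both layouts. A minimal sketch with DuckDB's Python API and its hf:// filesystem support (the monthly-file directory name here is a guess; check the repo tree):

  import duckdb

  con = duckdb.connect()
  total = con.sql("""
      SELECT count(*) FROM read_parquet([
          'hf://datasets/open-index/hacker-news/data/*.parquet',   -- monthly files (hypothetical path)
          'hf://datasets/open-index/hacker-news/today/*.parquet'   -- 5-minute live blocks
      ])
  """).fetchone()[0]
  print(total)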

tomrod17 minutes ago
Parquet is a very efficient storage approach. Data interfaces tend to treat paths as partitions when the layout is logical.
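
For example, with pyarrow a hive-style layout like year=2026/month=03/ comes back as partition columns automatically. This layout is just an illustration of the convention, not this dataset's actual one:

  import pyarrow.dataset as ds

  # directories named like key=value become filterable partition columns
  dataset = ds.dataset("hn_items/", format="parquet", partitioning="hive")
  table = dataset.to_table(filter=ds.field("year") == 2026)
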
fabmilo35 minutes ago
Was thinking the same thing. Probably once a day would be more than enough. If you really want minute-by-minute updates, a delta file from the previous day should suffice.
xnx2 hours ago
The best source for this data used to be ClickHouse (https://play.clickhouse.com/play?user=play#U0VMRUNUIG1heCh0a...), but it hasn't updated since 2025-12-26.
robotswantdata47 minutes ago
Where’s the opt-out?
BowBun0 minutes ago
By posting comments on this site, you are relinquishing your right to that content. It belongs to YC and it is theirs to enforce, not yours. https://www.ycombinator.com/legal/
john_strinlai45 minutes ago
Hacker News is very upfront that they do not really care about deletion requests or anything of that sort, so the opt-out is to not use Hacker News.
ratg1330 minutes ago
Create a new account every so often, don’t leave any identifying information, occasionally switch up the way you spell words (British/US English), and alternate using different slang words and shorthand.
fdghrtbrt23 minutes ago
And do what I do - paste everything into ChatGPT and have it rephrase it. Not because I need help writing, but because I’d rather not have my writing style used against me.
socksy13 minutes ago
I can't stand this and will actively discriminate against comments I notice in that voice. Even this one has "Not because [..], but because [..]"
tantalor37 minutes ago
The back button
gkbrk2 hours ago
My Hacker News items table in ClickHouse has 47,428,860 items, and it's 5.82 GB compressed and 18.18 GB uncompressed. What makes Parquet compression worse here, when both formats are columnar?
0cf8612b2e1e2 hours ago
Sorting, compression algorithm and level, and data types can all have an impact. I noted elsewhere that a Boolean is getting represented as an integer. That’s one bit vs 1-4 bytes.

There is also flexibility in what you define as the dataset. Skinnier but more focused tables could save space vs a wide table that covers everything, which will probably break up compressible runs of data.
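
A toy way to see the type effect, assuming pyarrow (sorting before writing helps similarly, by lengthening compressible runs):

  import random
  import pyarrow as pa
  import pyarrow.parquet as pq

  flags = [random.random() < 0.01 for _ in range(1_000_000)]
  pq.write_table(pa.table({"deleted": pa.array(flags, type=pa.bool_())}),
                 "as_bool.parquet")
  pq.write_table(pa.table({"deleted": pa.array([int(f) for f in flags], type=pa.int32())}),
                 "as_int32.parquet")
  # compare the two file sizes on disk; bools are bit-packed before compression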

xnx2 hours ago
Parquet has a few compression options. Not sure which one they are using.
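
The usual options, for reference (pyarrow shown; the dataset's own settings are unknown):

  import pyarrow as pa
  import pyarrow.parquet as pq

  table = pa.table({"id": list(range(100_000))})
  for codec in ("snappy", "gzip", "zstd"):
      pq.write_table(table, f"items.{codec}.parquet", compression=codec)
  # zstd also takes a level: higher = smaller files, slower writes
  pq.write_table(table, "items.zstd9.parquet",
                 compression="zstd", compression_level=9)
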
hirako20002 hours ago
Plus, Parquet isn't the least wasteful format; native DuckDB, for instance, compacts better. That's not just down to the compression algorithm, which as you say has three main options for Parquet.
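
Easy to test for yourself; a sketch comparing footprints (the input filename is hypothetical, and results vary by data and settings):

  import duckdb

  con = duckdb.connect("hn.duckdb")  # native DuckDB storage file
  con.sql("CREATE TABLE items AS SELECT * FROM read_parquet('items.parquet')")
  con.close()
  # now compare the size of hn.duckdb with the source Parquet file
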
epogrebnyak35 minutes ago
Wonder why the median vote count is 0; it seems every post gets at least a few votes. Maybe this was not the case in the past.
epogrebnyak34 minutes ago
Ahhh, I got it the moment I asked: there are usually no votes on comments.
maxloh27 minutes ago
Could you also release the source code behind the automatic update system?
imhoguy35 minutes ago
Yay! So much knowledge in just 11GB. Adding to my end-of-the-world hoarding stash!
brtkwr39 minutes ago
This comment should make it into the download in a few mins.
tantalor37 minutes ago
As should this reply
ericfr114 minutes ago
Hello to myself for posterity
politician13 minutes ago
This is great. I've soured on this site over the past few years due to the heavy partisanship that wasn't as present in the early days (eternal September), but there are still quite a few people whose opinions remain thought-provoking and insightful. I'm going to use this corpus to make a local self-hosted version of HN with the ability to a) show inline article summaries and b) follow those folks.
mlhpdx2 hours ago
Static web content and dynamic data?

> The archive currently spans from 2006-10 to 2026-03-16 23:55 UTC, with 47,358,772 items committed.

That’s more than 5 minutes ago by a day or two. No big deal, but a little bit depressing that this is still how we do things in 2026.

voxic1154 minutes ago
That is just the archive part; if you had finished reading the paragraph you would know that updates since 2026-03-16 23:55 UTC "are fetched every 5 minutes and committed directly as individual Parquet files through an automated live pipeline, so the dataset stays current with the site itself."

So to get all the data you need to grab the archive and all the 5 minute update files.

The archive data is here: https://huggingface.co/datasets/open-index/hacker-news/tree/...

The update files are here (I know it's called "today", but it actually includes all the update files, which span multiple days at this point): https://huggingface.co/datasets/open-index/hacker-news/tree/...
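
If it helps, a sketch of grabbing both parts in one go with huggingface_hub (directory names as in the links above; verify against the repo):

  from huggingface_hub import snapshot_download

  local_dir = snapshot_download(
      repo_id="open-index/hacker-news",
      repo_type="dataset",
      allow_patterns=["*.parquet"],  # matches archive months and today/ blocks
  )
  print(local_dir)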

john_strinlai50 minutes ago
>if you had finished reading the paragraph

probably uncalled for

fatty_patty891 minute ago
Not really, since the original comment completely missed it.
xandrius1 hour ago
I don't get what you meant with this comment.
john_strinlai57 minutes ago
The data updates every 5 minutes, but the description on Hugging Face says the last update was 2 days ago.

They are suggesting that the Hugging Face description should automatically update the date & item count when the data gets updated.

voxic1153 minutes ago
No, that is the date at which the bulk archive ends and the 5-minute update files begin, so it should not be updated.
kshacker1 hour ago
Good for a demo, but every 5 minutes? Why?
Imustaskforhelp1 hour ago
I can think of some good use cases for it. Personally, I really appreciate the 5-minute updates.
alstonite1 hour ago
What happened between 2023 and 2024 to cause the usage dropoff?
ghgr1 hour ago
I'd say it's less a usage dropoff and more a reversion to the mean after Covid
tehjoker1 hour ago
That's a possible hypothesis, but there was also a rising trend prior; it wasn't stable.
imhoguy41 minutes ago
Return to office
lyu072821 hour ago
Please upload to https://academictorrents.com/ as well if possible
palmotea2 hours ago
> At midnight UTC, the entire current month is refetched from the source as a single authoritative Parquet file, and today's individual 5-minute blocks are removed from the today/ directory.

Wouldn't that lose deleted/moderated comments?

BoredPositron1 hour ago
I guess that's the point.
Imustaskforhelp1 hour ago
Can't someone create an automatic script which just copies the files, say, 5 minutes before midnight UTC?
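
Something like this sketch (huggingface_hub assumed), scheduled via cron at, say, 23:55 UTC:

  from datetime import datetime, timezone
  from huggingface_hub import snapshot_download

  stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
  snapshot_download(
      repo_id="open-index/hacker-news",
      repo_type="dataset",
      allow_patterns=["today/*"],     # just the 5-minute live blocks
      local_dir=f"hn-today-{stamp}",  # kept before the midnight rollup deletes them
  )
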
0cf8612b2e1e2 hours ago
Under the Known Limitations section

  deleted and dead are integers. They are stored as 0/1 rather than booleans.
Is there a technical reason to do this? You have the type right there.
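
At least it's trivial to normalize on read (pandas shown; the month filename is hypothetical):

  import pandas as pd

  df = pd.read_parquet("2026-02.parquet")  # hypothetical monthly file
  for col in ("deleted", "dead"):
      df[col] = df[col].fillna(0).astype(bool)
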
Imustaskforhelp1 hour ago
As someone who made a project analysing Hacker News using ClickHouse, I really feel like this is a project made for me (especially the updated-every-5-minutes aspect, which could've helped my project back then too!)

Your project actually helps me out a ton with one of the new Hacker News project ideas that I had put on the back burner.

I had thought of making a ping website: people can just write @Username, and a service detects it and sends mail to said username if they have signed up (similar to a service run by someone from the HN community which mails you every time someone responds to your thread directly, but this time as a sort of ping).

[The previous idea came up as I tried to ping someone to show them something relevant and thought, wait a minute, something like a ping-that-mails might be interesting. I tried to see if I could use Algolia or any other service to hook things up, but none made much sense back then, sadly, so the idea sat in the back of my mind. This dataset sort of solves it by being updated every 5 minutes.]

Your 5-minute updates really make it possible. I will look at what I can do with that in some days, but I am seeing some discrepancy: the last update seems to be 16 March in the readme, so I would love to know more about whether it's really being updated every 5 minutes, because it truly feels phenomenal if true, and it's exciting to think of the new possibilities it unlocks.
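
For what it's worth, the core of the ping idea is small; a rough sketch where the subscriber list, notify(), and the block path are all hypothetical:

  import re
  import pandas as pd

  subscribers = {"somebody": "somebody@example.com"}  # users who signed up

  def notify(email: str, text: str) -> None:
      print(f"would mail {email}: {text[:80]}")  # stub; real version sends mail

  def scan_block(path: str) -> None:
      df = pd.read_parquet(path)  # one 5-minute Parquet block
      for text in df["text"].dropna():
          for name in re.findall(r"@(\w+)", text):
              if name in subscribers:
                  notify(subscribers[name], text)

  scan_block("today/2026-03-18T12-05.parquet")  # hypothetical filename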

tonymet1 hour ago
what's the license for HN content?
BowBun1 minute ago
We have LLMs and links to the TOS; this is easily answerable by _anyone_ on the internet at this point.

Comments+posts are defined as user generated content, you have no right to its privacy/control in any capacity once you post it - https://www.ycombinator.com/legal/

YC in theory has the right to go after unauthorized 3rd parties scraping this data. But YC funds startups and has a deep vested interest in the AI space. Why on earth would they do that?

echelon1 hour ago
At this point, you can train on anything without repercussion.

Copyright doesn't seem to matter unless you're an IP cartel or mega cap.

marginalia_nu1 hour ago
Laughs nervously in jurisdiction without fair use doctrine
Onavo3 hours ago
Is it possible to download only a subset? E.g. Show HNs or HN Whoishiring. The Show HNs and HN Whoishiring posts are very useful for classroom data science, i.e. a good dataset for students to learn the basics of data cleaning and engineering.
nelsondev2 hours ago
It’s date-partitioned, so you could download just a date range. It’s also Parquet, so you can download just specific columns with the right client.
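
For example, with DuckDB over hf:// only the needed columns and row groups get fetched (the path pattern here is a guess; column names per the HN API schema):

  import duckdb

  shows = duckdb.sql("""
      SELECT id, title, time
      FROM read_parquet('hf://datasets/open-index/hacker-news/data/*.parquet')
      WHERE title LIKE 'Show HN:%'
  """).df()
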
bstsb2 hours ago
What’s the license? “Do whatever the fuck you want with the data as long as you don’t get caught”? Or does that only work for massive corporations?
BoredPositron1 hour ago
The universal license.
GeoAtreides2 hours ago
Is the legal page a placeholder? Do words have no meaning?

https://www.ycombinator.com/legal/

Mods, enforce your license terms, you're playing fast and loose with the law (GDPR/CPRA)

Retr0id2 hours ago
Which terms are not being enforced? (Not disagreeing, I just don't feel like reading a large legal document.)
GeoAtreides2 hours ago
> By uploading any User Content you hereby grant and will grant Y Combinator and its affiliated companies

The user content is supposed to be licensed only to Y Combinator and (bleah) its affiliated companies (which are many: all the startups they fund, for example).

jmalicki1 hour ago
Curious why it should be on Hacker News to enforce restrictions on content they only license from you?

If it's owned by you and only licensed by HN shouldn't you be the one enforcing it?

AndrewKemendo1 hour ago
Seems like they are trying to do that through the stated legal intermediary (YC)
zamadatix1 hour ago
If you carry on the quote two more words:

> ... a nonexclusive

I.e. this section is about additional rights to the content you post ALSO going to YC, not about YC guaranteeing that it (+friends) will be the only ones to hold these rights, or enforcing for you who else holds rights to your publicly shared content.

There's a more intricate conversation to be had about GDPR and public data on forums in general, but that's wholly unrelated to what YC's legal page says, and still unlikely to end up in an alarming result.

ryandvm1 hour ago
That agreement is largely about "Personal Information", not the posts and comments.

That said, there are "no scraping" and "commercial use restricted" carve-outs for the content on HN. Which honestly is bullshit.

ungruntled2 hours ago
None that I could see:

Your submissions to, and comments you make on, the Hacker News site are not Personal Information and are not "HN Information" as defined in this Privacy Policy.

Other Users: certain actions you take may be visible to other users of the Services.

GeoAtreides2 hours ago
I mean, just because they say the comments are not PI doesn't make it so.
ungruntled2 hours ago
That’s a good point. I’m only referring to the terms they used in the privacy policy.
ryandvm1 hour ago
Eh, fuck that agreement. I'm kind of old school in that I believe if you put it on the internet without an auth-wall, people should be allowed to do whatever they want with it. The AI companies seem to agree.

Then again, I'm not the guy that is going to get sued...

Ylpertnodi1 hour ago
> I believe if you put it on the internet without an auth-wall, people should be allowed to do whatever they want with it.

I agree. It's the owners of the sites that have to follow rules, not us.

kmeisthax1 hour ago
"I'm kind of old school in that I believe if you put grass on the ground without a fence, people should be allowed to do whatever they want with it. The noblemen with a thousand cows seem to agree."

And that, my friends, is how you kill the commons - by ignoring the social context surrounding its maintenance and insisting upon the most punitive ways of avoiding abuse.

petercooper1 hour ago
Context is important, but isn’t HN’s social context, in particular, that the site is entirely public, easily crawled through its API (which apparently has next to no rate limits) and/or Algolia, and has been archived and mirrored in numerous places for years already?
echelon1 hour ago
Signal and information are not grass.

Grass and property require upkeep. Radio waves and electromagnetic radiation do not.

I don't want your dog to piss on my lawn and kill my grass. But what harm does it cause me if you take a picture of my lawn? Or if I take a picture of your dog?

If I spend $100M making a Hollywood movie - pay employees, vendors, taxes - contribute to the economic growth of the country - and then that product gets stolen and given away completely for free without being able to see upside, that's a little bit different.

But my Hacker News comment? It's not money.

I think there are plausible ways to draw lines that protect genuine work, effort, and economics while allowing society and innovation to benefit from the commons.

hsuduebc22 hours ago
How is he breaking GDPR here?
andrewmcwatters2 hours ago
They already refuse to comply with CPRA, instead electing to replace your username with a random 6(?) character string, prefixed with `_`, if I remember correctly.

I know, because I've been here since maybe 2015 or so, but this account was created in 2019.

So any PII you have mentioned in your comments is permanent on Hacker News.

I would appreciate it if they gave users the ability to remove all of their personal data, but in correspondence and in writing here on Hacker News itself, Dan has suggested that they value the posterity of conversations over the law.

lokimoon1 hour ago
You are the product
waynesonfire38 minutes ago
Your reward is the endorphin hit from writing this comment.