tux34 hours ago
See the public phab ticket: https://phabricator.wikimedia.org/T419143

In short, a Wikimedia Foundation account was doing some sort of test which involved loading a large number of user scripts. They decided to just start loading random user scripts, instead of creating some just for this test.

The user who ran this test is a Staff Security Engineer at WMF, and naturally they decided to do this test under their highly-privileged Wikimedia Foundation staff account, which has permissions to edit the global CSS and JS that runs on every page.

One of those random scripts was a two-year-old malicious script from ruwiki. The script injects itself into the global JavaScript that runs on every page, and then into the user scripts of any user who runs into it, so it started spreading and doing damage really fast. This triggered tons of alerts, until the decision was made to put the wiki into read-only mode.

Ferret74461 hour ago
This is a pretty egregious failure for a staff security engineer
mcmcmc1 hour ago
Pretty much the definition of a “career limiting event”
modderation31 minutes ago
It's either a Career Limiting Event, or a Career Learning Event.

In the case of a Learning event, you keep your job, and take the time to make the environment more resilient to this kind of issue.

In the case of a Limiting event, you lose your job, and get hired somewhere else for significantly better pay, and make the new environment more resilient to this kind of issue.

Hopefully the Wikimedia foundation is the former.

radicaldreamer49 minutes ago
Nobody is going to know who did this, so probably not career limiting in any major way.
xeromal42 minutes ago
They named him in the support ticket linked here somewhere.

> sbassett

xvector1 hour ago
They'll be fine, recruiters don't look this stuff up and generally background checks only care about illegal shit.
londons_explore4 hours ago
Didn't realise this was some historic evil script and not some active attacker who could change tack at any moment.

That makes the fix pretty easy. Write a regex to detect the evil script, and revert every page to a historic version without the script.
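
Something like this, as an untested sketch against the MediaWiki action API (the marker string and cutoff timestamp are placeholders, not the worm's actual signature):

  // Untested sketch: find pages whose wikitext still contains a marker string
  // from the evil script (a plain substring here rather than a full regex, for
  // simplicity), then undo everything back to the newest revision that
  // predates the infection. MARKER and CLEAN_BEFORE are made up.
  (async () => {
    const api = new mw.Api();
    const MARKER = 'basemetrika';                 // placeholder signature
    const CLEAN_BEFORE = '2026-03-01T00:00:00Z';  // placeholder cutoff

    const hits = await api.get({
      action: 'query', list: 'search',
      srsearch: 'insource:"' + MARKER + '"', srlimit: 'max'
    });

    for (const page of hits.query.search) {
      // Revision history, newest first
      const res = await api.get({
        action: 'query', prop: 'revisions', titles: page.title,
        rvprop: 'ids|timestamp', rvlimit: 'max'
      });
      const revs = Object.values(res.query.pages)[0].revisions;
      const clean = revs.find(r => r.timestamp < CLEAN_BEFORE);
      if (!clean) continue;  // no pre-infection revision to go back to
      await api.postWithEditToken({
        action: 'edit', title: page.title,
        undo: revs[0].revid, undoafter: clean.revid,
        summary: 'Revert worm edits (cleanup sketch)'
      });
    }
  })();

(In practice the undo step can fail on pages with conflicting intermediate edits, so it's not quite as push-button as it sounds.)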

jl61 hour ago
Letting ancient evil code run? Have we learned nothing from A Fire Upon the Deep?!
HoldOnAMinute1 hour ago
"It was really just humans playing with an old library. It should be safe, using their own automation, clean and benign.

This library wasn't a living creature, or even possessed of automation (which here might mean something more, far more, than human)."

varenc58 minutes ago
Link to the Prologue of Fire Upon the Deep: https://www.baen.com/Chapters/-0812515285/A_Fire_Upon_the_De...

It's very short and from one of my favorite books. Increasingly relevant.

edoceo1 hour ago
I've only just heard of it. But, I already knew to not run random scripts under a privileged account. And thank you for the book suggestion - I'm into those kinds of tales.
xeromal47 minutes ago
I love that book
Melatonic45 minutes ago
Or just restore from backup across the board. Assuming they do their backups well, this shouldn't be too hard (especially since it's currently in read-only mode, which means no new updates).
observationist1 hour ago
Are you sure? Are you $150 million ARR sure? Are you $150 million ARR, you'd really like to keep your job, you're not going to accidentally leave a hole or blow up something else, sure?

I agree, mostly, but I'm also really glad I don't have to put out this fire. Cheering them on from the sidelines, though!

jacquesm3 hours ago
True but it does say something that such a script was able to lie dormant for so long.
outofpaper2 hours ago
Why would anyone test in production???!!!
HoldOnAMinute1 hour ago
There are plenty of ways to safely test in production. For one thing you need to limit the scope of your changes.
ninth_ant1 hour ago
Selecting the wrong environment in your test setup by mistake?

I refuse to believe that someone on the security team intentionally tested random user scripts in production on purpose.

withinboredom17 minutes ago
Once you get big enough… there comes a point where you need to run some code and learn what 100 million people hitting it at once looks like. At that scale, “1 in a million class bugs/race conditions” literally happen every day. You can’t do that on every PR, so you ship it and prepare to roll back if anything even starts to look fishy. Maybe even just roll it out gradually.

At least, that’s how it worked at literally every big company I worked at so far. The only reason to hold it back is during testing/review. Once enough humans look at it, you release and watch metrics like a hawk.

And yeah, many features were released this way, often gated behind feature flags to control roll out. When I refactored our email system that sent over a billion notifications a month, it was nerve wracking. You can’t unsend an email and it would likely be hundreds of millions sent before we noticed a problem at scale.
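
For what it's worth, the gating itself is simple enough. Here's a rough sketch of a percentage rollout gate (my own illustration, not any particular company's system): hash a stable user id so the same user always lands in the same bucket, then compare against the rollout percentage.

  // Rough illustration of a percentage rollout gate (Node.js).
  const crypto = require('crypto');

  function inRollout(flagName, userId, percent) {
    // Deterministic per (flag, user): the same inputs always give the same
    // bucket, so a user doesn't flip between old and new behaviour per request.
    const hash = crypto.createHash('sha256')
      .update(flagName + ':' + userId)
      .digest();
    const bucket = hash.readUInt32BE(0) % 100;  // 0..99
    return bucket < percent;
  }

  // e.g. send 5% of users through the refactored email pipeline first
  if (inRollout('new-email-pipeline', 'user-12345', 5)) {
    // new code path
  } else {
    // old code path
  }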

irishcoffee1 hour ago
> I refuse to believe that someone on the security team intentionally tested random user scripts in production on purpose.

Do I have a bridge to sell you, oh boy

fifilura2 hours ago
I have never heard of this kind of insane behaviour before.
cesarb11 minutes ago
> One of those random scripts was a 2 year old malicious script from ruwiki. This script injects itself in the global Javascript on every page, and then in the userscripts of any user that runs into it, so it started spreading and doing damage really fast.

So, like the Samy worm? (https://en.wikipedia.org/wiki/Samy_%28computer_worm%29)

davidd_10041 hour ago
300 million dollar organization btw
Fokamul1 hour ago
I'm guessing, "1> Hey Claude, your script ran this malicious script!"

"Claude> Yes, you're absolutely right! I'm sorry!"

karel-3d2 hours ago
wait as a wikipedia user you can just put random JS to some settings and it will just... run? privileged?

this is both really cool and really really insane

kemayo2 hours ago
It's a mediawiki feature: there's a set of pages that get treated as JS/CSS and shown for either all users or specifically you. You do need to be an admin to edit the ones that get shown to all users.

https://www.mediawiki.org/wiki/Manual:Interface/JavaScript
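
For example, a personal script page is just JavaScript the wiki serves back to you on every page view. A hypothetical User:Example/common.js (the loaded script title is made up) might contain:

  // Hypothetical contents of User:Example/common.js - runs on every page
  // view, but only for that user.
  // The classic pattern for pulling in someone else's script:
  mw.loader.load('https://en.wikipedia.org/w/index.php?title=User:SomeoneElse/coolgadget.js&action=raw&ctype=text/javascript');

  // Or small inline tweaks, e.g. hide the site notice banner:
  $(function () {
    $('#siteNotice').hide();
  });

The incident described upthread was essentially that first pattern run from a highly privileged account's global.js: the staff account kept importing other users' scripts, and one of them happened to be malicious.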

hk__22 hours ago
Yes, you can have your own JS/CSS that’s injected in every page. This is pretty useful for widgets, editing tools, or to customize the website’s appearance.
karel-3d2 hours ago
It sounds very dangerous to me but who am I to judge.
Brian_K_White2 hours ago
It's nothing.

For the global ones that need admin permissions to edit, it's no different from all the other code of mediawiki itself like the php.

For the user scripts, it's no worse than the fact that you can run Tampermonkey in your browser and have it modify every page from every site in whatever way you want.

bawolff46 minutes ago
It is kind of risky - you now have an entire, mostly unreviewed ecosystem of JavaScript code that users can experiment with.

However it's been really useful for letting power users customize the interface to their needs. It also serves as a sort of pressure release for when official devs are too slow to meet needs. At this point Wikipedia has become very dependent on it.

corndoge2 hours ago
That is how Mediawiki works. Everything is a page, including CSS and JS. It is not really different than including JS in a webpage anywhere else.
AlienRobot1 hour ago
On one hand, I was about to get irrationally angry someone was attacking Wikipedia, so I'm a bit relieved

On the other hand,

>a Staff Security Engineer at WMF, and naturally they decided to do this test under their highly-privileged Wikimedia Foundation staff account

seriously?

nhubbard5 hours ago
Wow. This worm is fascinating. It seems to do the following:

- Inject itself into the MediaWiki:Common.js page to persist globally, and into the User:Common.js page to do the same as a fallback

- Uses jQuery to hide UI elements that would reveal the infection

- Vandalizes 20 random articles with a 5000px wide image and another XSS script from basemetrika.ru

- If an admin is infected, it will use the Special:Nuke page to delete 3 random articles from the global namespace, AND use the Special:Random with action=delete to delete another 20 random articles

EDIT! The Special:Nuke is really weird. It gets a default list of articles to nuke from the search field, which could be any group of articles, and rubber-stamps nuking them. It does this three times in a row.

divbzero1 hour ago
There doesn’t seem to be an ulterior motive beyond “Muahaha, see the trouble I can cause!”
batiudrami11 minutes ago
A classical virus, from the good old days. None of this botnet/bitcoin mining in the background nonsense.
256_5 hours ago
As someone on the Wikipediocracy forums pointed out, basemetrika.ru does not exist. I get an NXDomain response trying to resolve it. The plot thickens.
pKropotkin5 hours ago
Yeah, basemetrika.ru is free now. Should we occupy it? ;)
acheong084 hours ago
I registered it about 40 minutes ago, but it seems the DNS has been cached by everyone as a result of the Wikipedia hack & not even the NS records are propagating. Can't get an SSL certificate.
bjord3 hours ago
nice work
Imustaskforhelp4 hours ago
I had looked into its availability on a registrar too, just out of curiosity, before reading your comment. At least it's been taken by someone from the Hacker News community and not a malicious actor.

Do keep us updated if anything relevant happens from your POV.

I'd suggest giving the domain to the Wikipedia team, as they might know the best use for it.

acheong0822 minutes ago
Not quite sure which channels I should reach out via but I've put my email on the page so they can contact me.

Based on timings, it seems that Wikipedia wasn't really at risk from the domain being bought as everything was resolved before NS records could propagate. I got 1 hit from the URL which would've loaded up the script and nothing since.

Freak_NL1 hour ago
This community has no malicious actors? :)
acheong0824 minutes ago
I'm not malicious at least :)

Pretty public with who I am https://duti.dev/

Barbing5 hours ago
Namecheap won’t sell it which is great because it made me pause and wonder whether it's legal for an American to send Russians money for a TLD.
throw-the-towel2 hours ago
Namecheap is Ukrainian, of course they won't sell you a .ru domain.
craftkiller1 hour ago
Is it? Wikipedia says:

> Namecheap is a U.S. based domain name registrar and web hosting service company headquartered in Phoenix, Arizona.

and in 2025 they were purchased by:

> CVC Capital Partners plc is a Jersey-based private equity and investment advisory firm

throw-the-towel11 minutes ago
I remember that in 2022 a sizeable part of their workforce was located in Ukraine. Too lazy to search for proof, sorry!
DaSHacka4 hours ago
Pretty sure it is, however, the reverse is actually illegal (for US citizens to provide professional services to anyone residing in Russia) as of like 2022-ish
amiga3865 hours ago
It means giving money to the Russian government, so no.

If anyone from the Russian government is reading this, get the fuck out of Ukraine. Thank you.

dwedge4 hours ago
Well done, it's finally over
INR186504 hours ago
reg.ru, the most popular registrar, sells .ru domains for $1.65, very little of which goes to the national registry. What is their profit on this domain, a couple of cents?

You have helped to bring peace by approximately zero nanoseconds, while doing absolutely nothing about western countries still buying massive amounts of natural resources from Putin. Taxes on those exports make up the primary source of income for the federal budget, which directly funds the military.

Good virtue signaling, though. I'm completely disillusioned with the West, this is nothing new.

avidruntime2 hours ago
I don't think voting with your wallet constitutes virtue signaling, especially at a time when end user boycotting is one of the universally known methods of protest.
janalsncm2 hours ago
I am a pragmatist so maybe I will never understand this line of thinking. But in my mind, there are no perfect options, including doing nothing.

By doing nothing, you are allowing a malicious actor to buy the domain. In fact I am sure they would love for everyone else to be paralyzed by purity tests for a $1 domain.

All things being equal, yeah don’t buy a .ru domain. But they are not equal.

256_5 hours ago
I'm half-tempted to try and claim it myself for fun and profit, but I think I'll leave it for someone else.

What should we put there, anyway?

speedgoose5 hours ago
A JavaScript call to window.alert to pause the JavaScript VM.
Imustaskforhelp3 hours ago
Looks like someone else from the Hacker News community has bought the domain https://news.ycombinator.com/item?id=47263323#47265499
gibsonsmog5 hours ago
Go old school and have the script inject the "how did this get here im not good with computers" cat onto random pages
gchamonlive5 hours ago
I'd log requests and echo them back in the page
yreg4 hours ago
The antinuke
bawolff5 hours ago
> Vandalizes 20 random articles with a 5000px wide image and another XSS script from basemetrika.ru

Note that while this looks like it's trying to trigger an XSS, what it's doing is ineffective, so basemetrika.ru would never get loaded (even ignoring that the domain doesn't exist)

dheera5 hours ago
Wouldn't be surprised if elaborate worms like this are AI-designed
nhubbard5 hours ago
I wouldn't be surprised either. But the original formatting of the worm makes me think it was human written, or maybe AI assisted, but not 100% AI. It has a lot of unusual stylistic choices that I don't believe an AI would intentionally output.
integralid5 hours ago
I would. AI designed software in general does not include novel ideas. And this is the kind of novel software AI is not great at, because there's not much training data.

Of course it's very possible someone wrote it with AI help. But almost no chance it was designed by AI.

idiotsecant1 hour ago
I mean....elaborate is a stretch.
Kiboneu5 hours ago
> Cleaning this up is going to be an absolute forensic nightmare for the Wikimedia team since the database history itself is the active distribution vector.

Well, the worm didn't get root -- so if Wikimedia takes snapshots or made a recent backup, probably not so much of a nightmare? Then the diffs can tell a fairly detailed forensic story, including indicators of motive.

Snapshotting is a very low-overhead operation, so you can make them very frequently and then expire them after some time.

Extropy_4 hours ago
Even if they reset to several days ago and lose, say, thousands of edits, even tens of thousands of minor edits, they're still in a pretty good place. Losing a few days of edits is less-than-ideal but very tolerable for Wikipedia as a whole
tetha4 hours ago
At $work we're hosting business knowledge databases. Interestingly enough, if you need to revert a day or two of edits, you're better off doing it ASAP rather than postponing and mulling it over. Especially if you can keep a dump or an export around.

People usually remember what they changed yesterday and have uploaded files and such still around. It's not great, but quite possible. Maybe you need to pull a few content articles out from the broken state if they ask. No huge deal.

If you decide to roll back after a week or so, editors get really annoyed, because now they are usually forced to backtrack and reconcile the state of the knowledge base, maybe you need a current and a rolled-back system, it may have regulatory implications and it's a huge pain in the neck.

Kiboneu4 hours ago
Nah, you can snapshot every 15 minutes. The snapshot interval depends on the frequency of changes and your capacity, and it's up to them how to allocate that capacity... but it's definitely doable and there are real reasons for doing so. You can collapse the deltas between snapshots after some time to make them last longer. I'd be surprised if they don't do that.

As an aside, snapshotting would have prevented a good deal of horror stories shared by people who give AI access to the FS. Well, as long as you don't give it root.......

john_strinlai4 hours ago
>Nah, you can snapshot every 15 minutes.

obviously you can. but, what is the actual snapshot frequency? like, what is the timestamp of the last known good snapshot? that is what matters.

in any case, the comment you are replying to is a hypothetical, which correctly points out that even a day or two of lost edits is fine (not ideal, but fine). your reply doesn't engage with their comment at all.

Kiboneu4 hours ago
> the comment you are replying to is a hypothetical, which correctly points out that even a day or two of lost edits is fine (not ideal, but fine). your reply doesnt engage with their comment at all.

I did engage, by pointing out that it was neither relevant nor a realistic scenario for a competent sysadmin. (Did you read the OP?) That's a /you/ problem if you rely on infrequent backups, especially for a service with so much flux.

> what is the actual snapshot frequency? like, what is the timestamp of the last known good snapshot?

? Why would I know what their internal operations are?

john_strinlai4 hours ago
>I did engage, by pointing out that it wasn't relevant nor a realistic scenario for a competent sysadmin.

>Why would I know what their internal operations are?

i mean... you must, right? you know that once-a-day snapshots is not relevant to this specific incident. you know that their sysadmins are apparently competent. i just assumed you must have some sort of insider information to be so confident.

Kiboneu4 hours ago
I think you are misreading my comments and made a bad assumption. The reason I'm confident is because this has been my bread and butter for a decade.
john_strinlai4 hours ago
>The reason I'm confident is because this has been my bread and butter for a decade.

my decade of dealing with incompetent sysadmins and broken backups (if they even exist) has given me the opposite of confidence.

but im glad you have had a different experience

Kiboneu3 hours ago
> my decade of dealing with incompetent sysadmins and broken backups (if they even exist) has given me the opposite of confidence.

Oh, I agree that the average bar is low. That's part of the reason I do it all myself.

The heuristic with Wikimedia is that they've been running a PHP service that accepts and stores (anonymous) input for 25 years. The longevity, given the risk exposure that they have, is an indicator that they know what they are doing, and I'm sure they've learned from recovering all sorts of failures over the years.

Look at how quickly it was brought back up in this instance!

So, yeah. I don't think the initial hypothetical counterpoint holds water, and that's what I have been pointing out.

jibal2 hours ago
Kudos for very polite responses to trolling.
Kiboneu2 hours ago
I have good faith, though I should get off hn now... :P

I still don't need to assume what the intent is. Troll or no troll, it works. My comments might inspire someone else to try a CoW fs. I'm also really impressed with wikimedia's technical team.

john_strinlai2 hours ago
no one is trolling in this comment chain.

i found kibone's reply to a hypothetical musing as if it was some counterpoint in a debate instead of a simple expansion on their comment to be off putting. we had some comments back and forth and we both came out of it just fine. weird of you to add on this little insult to an otherwise pretty normal exchange.

Kiboneu1 hour ago
FWIW I did not assume that you were trolling, and yes we did come out fine.
sobjornstad4 hours ago
Nowadays I refuse to do any serious work that isn't in source control anywhere besides my NAS that takes copy-on-write snapshots every 15 minutes. It has saved my butt more times than I can count.
Kiboneu4 hours ago
Yeah, same here. Earlier I had a sync error that corrupted my .git somehow. No problem; I went back 15 minutes and copied the working version.

Feels good to pat oneself on the back. Mine is sore, though. My E&O/cyber insurance likes me.

gchamonlive4 hours ago
The problem isn't the granularity of the backup. Since the worm silently nukes pages, it's virtually impossible to reconcile the pre-attack state with the current state, so you have to just forfeit any changes made since then and ask the contributors to do the legwork of reapplying the correct changes.
Kiboneu4 hours ago
Why would nuked pages matter? Snapshots capture everything and are not part of wikimedia software.
gchamonlive2 hours ago
The nuke might be legitimate?
wizzwizz42 hours ago
That's not a lot of state lost. Destructive operations are easier to replay than constructive ones.
gchamonlive1 hour ago
Is Wikimedia overreacting then?
wizzwizz41 hour ago
No: from what I can tell, they're being conservative, which is appropriate here. Once you've pushed the "stop bad things happening" button, there's no need to rush.
bawolff45 minutes ago
Nothing was rolled back in the DB sense, I think people just used normal wiki revert tools.

It also never affected Wikipedia itself, just the smaller Meta site (used for interproject coordination)

wikiperson265 hours ago
A theory on phab: "Some investigation was made in Russian Wikipedia discord chat, maybe it will be useful.

1. In 2023, vandal attacks were made against two Russian-language alternative wiki projects, Wikireality and Cyclopedia. Here https://wikireality.ru/wiki/РАОрг is an article about the organisers of these attacks.

2. In 2024, ruwiki user Ololoshka562 created a page https://ru.wikipedia.org/wiki/user:Ololoshka562/test.js containing script used in these attacks. It was inactive next 1.5 years.

3. Today, sbassett massively loaded other users' scripts into his global.js on meta, maybe for testing global API limits: https://meta.wikimedia.org/wiki/Special:Contributions/SBasse... . In one edit, he loaded Ololoshka's script: https://meta.wikimedia.org/w/index.php?diff=prev&oldid=30167... and ran it."

orbital-decay4 hours ago
I remember someone mass-defacing the ruwiki almost exactly a year ago (March 3 2025) with some immature insults towards certain ruwiki admins. If I'm not mistaken it was a similar method.
Lockal1 hour ago
No, I think you are mixing something up.

- There are constant deface incidents caused by editing of unprotected / semiprotected templates

- There were incidents of UI mistranslation (because MediaWiki translation is crowdsourced)

- The attack that was applied is well known in the Russian community; it is pretty much the standard "admin-woodpecker". The standard woodpecker (some people call it neo-woodpecker) renamed all pages at high speed (I have known of this since 2007; the name woodpecker appeared many years later); then MediaWiki added throttling for renames; then the neo-woodpecker reappeared in different years (usually associated with throttling-bypass CVEs). Early admin-woodpeckers were much more destructive (they destroyed dozens of MediaWiki websites due to lack of backups). The nuking admin-woodpecker is quite a boring one, but I think (I hope) there are some AbuseFilter guardrails configured to prevent more complex woodpeckers.

- The attack initiator is 100% a well known user; there are not too many users who have applied the woodpecker in the first place, and not too many "upyachka" fans (which indicates the user was editing before 2010 - back then active editors knew each other much better). But it is quite pointless to discuss who exactly the initiator is.

- Wikireality page is hijacked by a small group and does not represent the reality.

varun_ch6 hours ago
Woah this looks like an old school XSS worm https://meta.wikimedia.org/wiki/Special:RecentChanges?hidebo...

I’ve always thought the fact that MediaWiki sometimes lets editors embed JavaScript could be dangerous.

varun_ch6 hours ago
Also, I’m surprised an XSS attack like this hasn’t yet been used to harvest credentials like passwords through browser autofill[0].

It seems like the worm code/the replicated code only really attacks stuff on site. But leaking credentials (and obviously people reuse passwords across sites) could be sooo much worse.

[0] https://varun.ch/posts/autofill/

hrmtst938371 hour ago
I think autofill-based credential harvesting is harder than it sounds because browsers and password managers treat saved credentials as a separate trust boundary, and every vendor implements different heuristics. The tricky part is getting autofill to fire without a real user gesture and then exfiltrating values, since many browsers require exact form attributes or a user activation and several managers ignore synthetic events.

If an attacker wanted passwords en masse they could inject fake login forms and try to simulate focus and typing, but that chain is brittle across browsers, easy to detect and far lower yield than stealing session tokens or planting persistent XSS. Defenders should assume autofill will be targeted and raise the bar with HttpOnly cookies, SameSite=strict where practical, multifactor auth, strict Content Security Policy plus Subresource Integrity, and client side detection that reports unexpected DOM mutations.
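
A minimal sketch of a couple of those server-side mitigations (illustrative Express app, not MediaWiki's actual stack):

  // Illustration only: a restrictive CSP plus a session cookie that page
  // JavaScript can't read and that isn't sent on cross-site requests.
  const express = require('express');
  const app = express();

  app.use((req, res, next) => {
    // Only same-origin scripts may execute; injected third-party script URLs
    // (like the basemetrika.ru payload) would be blocked by the browser.
    res.set('Content-Security-Policy', "default-src 'self'; script-src 'self'");
    next();
  });

  app.post('/login', (req, res) => {
    res.cookie('session', 'opaque-session-token', {
      httpOnly: true,    // invisible to document.cookie
      secure: true,      // HTTPS only
      sameSite: 'strict' // not sent on cross-site requests
    });
    res.sendStatus(204);
  });

  app.listen(3000);

Worth noting that a same-origin script-src wouldn't have stopped this particular worm, since the malicious JS was served by the wiki itself; it only limits what that JS can pull in from elsewhere.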

stephbook5 hours ago
Chrome doesn't actually autofill before you interact. It only visually displays what it would fill in at the same location.
varun_ch5 hours ago
but any interaction is good for Chrome, like dismissing a cookie banner
af785 hours ago
Time to add 2FA...
infinitewars4 hours ago
A comment from my wiki-editor friend:

  "The incident appears to have been a cross-site scripting hack. The origin of rhe malicious scripts was a userpage on the Russian Wikipedia. The script contained Russian language text.

  During the shutdown, users monitoring [https://meta.wikimedia.org/wiki/special:RecentChanges Recent changes page on Meta] could view WMF operators manually reverting what appeared to be a worm propagated in common.js

  Hopefully this means they won't have to do a database rollback, i.e. no lost edits."

Interesting to note how trivial it is today to fake something as coming "from the Russians".
Lockal53 minutes ago
Why do you think it was faked? It is well-known Russian tech (the woodpecker); the earliest version I can find now was created in 2013 (but I personally saw it in 2007). It has long been a Damocles sword hanging over misconfigured MediaWiki websites.
greyface-6 hours ago
sunaookami3 hours ago
dang3 hours ago
Thanks - we've added the first 3 links to the toptext. Not sure about the 4th.
nzeid6 hours ago
Wikipediocracy link gives "not authorized".
nubinetwork5 hours ago
works for me
Wikipedianon5 hours ago
This was only a matter of time.

The Wikipedia community takes a cavalier attitude towards security. Any user with "interface administrator" status can change global JavaScript or CSS for all users on a given Wiki with no review. They added mandatory 2FA only a few years ago...

Prior to this, any admin had that ability until it was taken away due to English Wikipedia admins reverting Wikimedia changes to site presentation (Mediaviewer).

But that's not all. Most "power users" and admins install "user scripts", which are unsandboxed JavaScript/CSS gadgets that can completely change the operation of the site. Those user scripts are often maintained by long abandoned user accounts with no 2 factor authentication.

Based on the fact user scripts are globally disabled now I'm guessing this was a vector.

The Wikimedia foundation knows this is a security nightmare. I've certainly complained about this when I was an editor.

But most editors that use the website are not professional developers and view attempts to lock down scripting as a power grab by the Wikimedia Foundation.

256_5 hours ago
Maybe somewhat unrelated, but I'm reminded of the fact that people have deleted the main page on a few occasions: https://en.wikipedia.org/wiki/Wikipedia:Don%27t_delete_the_m...
gucci-on-fleek2 hours ago
> Any user with "interface administrator" status can change global JavaScript or CSS for all users on a given Wiki with no review.

True, but there aren't very many interface administrators. It looks like there are only 137 right now [0], which I agree is probably more than there should be, but that's still a relatively small number compared to the total number of active users. But there are lots of bots/duplicates in that list too, so the real number is likely quite a bit smaller. Plus, most of the users in that list are employed by Wikimedia, which presumably means that they're fairly well vetted.

[0]: https://en.wikipedia.org/w/api.php?action=query&format=json&...
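
If anyone wants to recount, here's a quick sketch of that kind of query against the action API (the exact group name is my assumption):

  // Count members of the interface-admin group on English Wikipedia.
  // The group name 'interface-admin' is assumed; adjust if the wiki uses another.
  const url = 'https://en.wikipedia.org/w/api.php?action=query&list=allusers'
            + '&augroup=interface-admin&aulimit=max&format=json&origin=*';

  fetch(url)
    .then(r => r.json())
    .then(data => console.log(data.query.allusers.length, 'interface admins'));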

notRobot36 minutes ago
gucci-on-fleek30 minutes ago
Those are the English Wikipedia-only users, but you also need to include the "global" users (which I think were the source of this specific compromise?). Search this page [0] for "editsitejs" to see the lists of global users with this permission.

[0]: https://en.wikipedia.org/wiki/Special:GlobalGroupPermissions

RGamma3 hours ago
Seems like a good time to donate one's resources to fix it. The internet is super hostile these days. If Wikipedia falls... well...
Wikipedianon2 hours ago
It's a political issue. Editors are unwilling or unable to contribute to development of the features they need to edit.

Unfortunately, Wikipedia is run on insecure user scripts created by volunteers that tend to be under the age of 18.

There might be more editors trying to résumé-boost if editing Wikipedia under your real name didn't invite endless harassment.

tick_tock_tick1 hour ago
Wikipedia doesn't even spend the donations on Wikipedia anymore.
logophobia3 hours ago
This sounds more like a political issue. You can't buy your way out of that.
PsylentKnight3 hours ago
My understanding is that Wikipedia receives more donations than they need, surely they have the resources to fix it themselves?
noosphr3 hours ago
You would first need to realize it's a problem.
krater233 hours ago
Maybe this is the reason for this worm. Someone is angry because they couldn't get it any other way...
jibal2 hours ago
The worm is a two year old script from the Russian Wiki that was grabbed randomly for a test by a stupid admin running unsandboxed with full privileges, so no.
_verandaguy3 hours ago

    > Based on the fact user scripts are globally disabled now I'm guessing this was a vector.
Disabled at which level?

Browsers still allow for user scripts via tools like TamperMonkey and GreaseMonkey, and that's not enforceable (and arguably, not even trivially visible) to sites, including Wikipedia.

As I say that out loud, I figure there's a separate ecosystem of Wikipedia-specific user scripts, but arguably the same problem exists.

howenterprisey3 hours ago
Yeah, wikipedia has its own user script system, and that was what was disabled.
Wikipedianon2 hours ago
The sitewide JavaScript/CSS is an editable Wiki page.

You can also upload scripts to be shared and executed by other users.

karel-3d2 hours ago
This is apparently not done browser side but server side.

As in, user can upload whatever they wish and it will be shown to them and ran, as JS, fully privileged and all.

AlienRobot1 hour ago
For reference

>There are currently 15 interface administrators (including two bots).

https://en.wikipedia.org/wiki/Wikipedia:Interface_administra...

CloakHQ1 hour ago
Session compromise at this scale is usually less about breaking auth and more about harvesting valid sessions from environments where the browser itself leaks state. Most "secure" sessions assume the browser is a neutral transport - but the browser exposes a surprising amount of identity through fingerprint consistency across tabs, timing patterns, and cached state that survives logout. The interesting question here isn't the auth model, it's what the attacker's client looked like at the time of the requests.
tantalor6 hours ago
Nice to see jQuery still getting used :)
pixl975 hours ago
>Cleaning this up

Find the first instance and reset to the backup before then. An hour, a day, a week? Doesn't matter that much in this case.

bbor5 hours ago
It is true that they have a particularly robust, distributed backup system that can/has come in handy, but FWIW the timing matters to them. English Wikipedia receives ~2 edits per second, or 172,800 per day. Many of them are surely minor and/or automated, but still: roll back six days and that's 1,036,800 lost edits, which is a lot!
shevy-java5 hours ago
Are they really lost though? I think they should not be lost; they could additionally be stored in a separate database.
derefr4 hours ago
In fact, as long as the malware is just doing deletes, you can just merge the two "timelines" by restoring the snapshot and then replaying all the edits but ignoring the deletes. Lost deletes really aren't much of a problem!
Kiboneu4 hours ago
Filesystem & database snapshots are very cheap to make, you can make them every 15 minutes. You can expire old snapshots (or collapse the deltas between them) depending on the storage requirements.
squeaky-clean3 hours ago
That doesn't really matter though against an attack that takes some time to spread. If the attack was active for let's say, 6 hours, then 43,000 legitimate edits happened in between the last "clean" snapshot and the discovery of the attack. If you just revert to the last clean snapshot you lose those legitimate edits.
lifeisstillgood5 hours ago
I completely understand marking the software that controls drinking water as critical infrastructure - but at some point a state-based cyber attack that just wipes Wikipedia off the net is deeply damaging to our modern society’s ability to agree on common facts…

Just now I thought “if Wikipedia vanished, what would it mean…” and it’s not on the level of safe drinking water, but it is a level.

GuB-424 hours ago
> if Wikipedia vanished what would it mean …

That someone would need to restore some backups, and in the meantime, use mirrors.

Seriously, not that big of a deal. I don't know how many copies of Wikipedia are lying around but considering that archives are free to download, I guess a lot. And if you count text-only versions of the English Wikipedia without history and talk pages, it is literally everywhere as it is a common dataset for natural language processing tasks. It is likely to be the most resilient piece of data of that scale in existence today.

The only difficulty in the worst case scenario would be rebuilding a new central location and restarting the machinery with trusted admins, editors, etc... Any of the tech giants could probably make a Wikipedia replacement in days, with all data restored, but it won't be Wikipedia.

tempaccount50504 hours ago
What you're suggesting is literally impossible. There are plenty of mirrors and random people that download the thing in its entirety. The entire planet would have to be nuked for that to be possible.
xandrius3 hours ago
Don't worry, I personally have an offline backup of the English Wikipedia on my phone.
__turbobrew__4 hours ago
You can download the entirety of wikipedia and store it in your own offline immutable backup.
mrguyorama3 hours ago
The dump of English Wikipedia is 26 GB compressed, and completely usable in that compressed format plus a small index file.

That's small enough to live on most people's phones. It's small enough to fit on a single Blu-ray. Maybe Wikipedia should fund some mass printings.

What you do not get however is any media. No sounds, images, videos, drawings, examples, 3D artifacts, etc etc etc. This is a huge loss on many many many topics.

Aperocky5 hours ago
All persistent data should have backup.

It's not a high bar.

lyu072825 hours ago
There are so many mirrors anyway, and it's trivial to get a local copy. What is much more concerning is government censorship and age verification/digital ID laws, where what articles you read becomes part of your government record that the police see when they pull you over.
CaptainNegative4 hours ago
> but at some point a state based cyber attack that just wipes wikipedia off the net is deeply damaging to our modern society’s ability to agree on common facts

Haven't we hit that point already with bad faith (and potentially government-run) coordinated editing and voting campaigns, as both Wales and Sanger have been pointing out for a while now?

See, for example,

* Sanger: https://en.wikipedia.org/wiki/User:Larry_Sanger/Nine_Theses

* Wales: https://en.wikipedia.org/wiki/Talk:Gaza_genocide/Archive_22#...

* PirateWires: https://www.piratewires.com/p/how-wikipedia-is-becoming-a-ma...

wizzwizz41 hour ago
> Haven't we hit that point already with bad faith (and potentially government-run) coordinated editing […] campaigns,

Yes, this is a real phenomenon. See, for instance, https://en.wikipedia.org/wiki/Timeline_of_Wikipedia%E2%80%93...: the examples from 2006 are funny, and the article's subject matter just gets sadder and sadder as the chronology goes on.

> and voting campaigns

I'm not sure what you mean by this. Wikipedia is not a democracy.

> as both Wales and Sanger have been pointing out

{{fv}}. Neither of those essays make this point. The closest either gets is Sanger's first thesis, which misunderstands the "support / oppose" mechanism. Ironically, his ninth thesis says to introduce voting, which would create the "voting campaign" vulnerability!

These are both really bad takes, which I struggle to believe are made in good faith, and I'm glad Wikipedians are mostly ignoring them. (I have not read the third link you provided, because Substack.)

streetfighter644 hours ago
If you're using wikipedia to "agree on common facts" I think you might have bigger problems...
hnfong4 hours ago
Not the GP, and I don't believe in the existence of "common facts" in general, but Wikipedia is indeed a good place to figure out what other people might agree as common facts...
streetfighter641 hour ago
Well, I'm not sure either what the term "common facts" is supposed to mean, but Wikipedia is not a good place to look for what "other people" think, unless by "other people" you mean a small set of Wikipedia power users. Just like traditional newspapers are controlled by a small set of editors who decide what's worth publishing, so is Wikipedia.

https://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_no...

CSMastermind3 hours ago
Dwedit4 hours ago
I just checked a wiki, and the "MediaWiki:Common.js" page there was read-only, even for wikisysop users.
bawolff2 hours ago
You need to be a special type of admin, called "interface-admin" to edit it. Normal admin is not enough.
dlcarrier4 hours ago
I've never understood why client-side execution is so heavy in modern web pages. Theoretically, the costs to execute it are marginal, but in practice, if I'm browsing a web page from a battery-powered device, all that compute power draining the battery not only affects how long I can use the device between charges, but is also adding wear to the battery, so I'll have to replace it sooner. Also, a lot of web pages are downright slow, because my phone can only perform 10s of billions of operations per second, which isn't enough to responsively arrange text and images (which are composited by dedicated hardware acceleration) through all of the client-side bloat on many modern web pages. If there was that much bloat on the server side, the web server would run out of resources with even moderate usage.

There's also a lot of client-side authentication, even with financial transactions, e.g. with iOS and Android locally verifying a user's password, or worse yet a PIN or biometric information, then sending approval to the server. Granted, authentication of any kind is optional for credit card transactions in the US, so all the rest is security theater, but if it did matter, it would be the worst way to do it.

clcaev4 hours ago
We should be using federated organizational architectures when appropriate.

For Wikipedia, consider a central read-only aggregated mirror that delegates the editorial function to specialized communities. Common, suggested tooling (software and processes) could be maintained centrally but each community might be improved with more independence. This separation of concerns may be a better fit for knowledge collection and archival.

Note: I edited to stress central mirroring of static content with delegation of editorial function to contributing organizations. I'm expressly not endorsing technical "dynamic" federation approaches.

brcmthrowaway4 hours ago
Exactly. Wikipedia should be used on ipfs
devmor5 hours ago
In the early 2010’s I worked for a company whose primary income was subscriptions to site protection services - one of which included cleaning up malware-infected Wordpress installations. I worked on the team that did this job.

This exact type of database-stored executable javascript was one of the most annoying types of infections to clean up.

0xWTF5 hours ago
Ok, so there are tons of mediawiki installations all over the internet. What do these operators do? Set their wikis to read-only mode, hang tight, and wait for a security patch?

Also, does this worm have a name?

bawolff5 hours ago
There is nothing to do, the incident was not caused by a vulnerability in mediawiki.

Basically someone who had permissions to alter site js, accidentally added malicious js. The main solution is to be very careful about giving user accounts permission to edit js.

[There are of course other hardening things that maybe should be done based on lessons learned]

dboreham4 hours ago
There are already tools and techniques to validate that served JS is as intended, and these techniques could be beefed up by adding browser checks. I've been surprised these haven't been widely adopted given the spate of recent JS-poisoning attacks.
streetfighter645 hours ago
Well, admins (or anybody other than the developers / deployment pipeline) having permissions to alter the JS sounds like a significant vulnerability. Maybe it wasn't in the early 2000s, but unencrypted HTTP was also normal then.
bawolff2 hours ago
That's a fair point, but keep in mind normal admin is not sufficient. For local users (the account in question wasn't local) you need to be an "interface admin", of which there are only 15 on English Wikipedia.

The account in question had "staff" rights which gave him basically all rights on all wikis.

LaGrange4 hours ago
> Well, admins (or anybody other than the developers / deployment pipeline) having permissions to alter the JS sounds like a significant vulnerability.

It's a common feature of CMS'es and "tag management systems." Its presence is a massive PITA to developers even _besides_ the security, but PMs _love them_, in my experience.

mafriese4 hours ago
I’m not saying that this is related to Wikipedia ditching archive.is, but the timing in combination with the Russian messages is at least… weird.
armchairhacker3 hours ago
The script was uploaded in 2024, and triggered today because of an accident

https://en.wikipedia.org/wiki/Wikipedia:Village_stocks#Scott...

worksonmine4 hours ago
And they probably used mind control to make the admin run random user scripts on his privileged account as well; the capabilities of Russian hackers are scary.

/s

It is just another human acting human again.

shevy-java5 hours ago
It is unfortunate that Wikipedia is under attack. It seems as if there are more malicious actors now than, say, 5 years ago.

This may be unrelated, but I also noticed more attacks on e.g. libgen, Anna's Archive and whatnot. I am not at all saying this is similar to Wikipedia as such, mind you, but it really seems as if there are more actors active now who target people's freedom (e.g. freedom of choice of access to any kind of information; age restriction aka age "verification" taps into this too).

jibal2 hours ago
Wikipedia is not under attack. Some stupid admin running with full privileges unsandboxed ran a test that grabbed and ran random user scripts, and one of them just happened to be this 2 year old malicious script.
sciencejerk4 hours ago
I wonder if any poisoned data made it into LLM training data pipelines?
ibejoeb4 hours ago
Interesting angle. Everyone has already pointed out that there are backups basically everywhere, and from an information standpoint, shaving off a day (or whatever) of edits just to get to a known-good point is effectively zero cost. But I wonder what the cost is of the potentially bad data getting baked into those models, and if anyone really cares enough to scrap it.
garbagecreator5 hours ago
Another reason to make disabling JS the default on all websites; websites should offer a service without JS, especially those implemented in obsolete garbage tech. If it's not an XSS from a famous website, it will be an exploit from a sketchy website.
j455 hours ago
Too much app logic in the client side (Javascript) has always been an attack vector. The more that can reasonably be server side, the more that can't be seen.
dns_snek5 hours ago
The amount of javascript is really beside the point here. The problem is that privileged users can easily edit the code without strong 2FA, allowing automatic propagation.
shevy-java5 hours ago
How does 2FA prevent this here?
dns_snek4 hours ago
If they required 2FA every time you wanted to modify JS then it couldn't propagate automatically. Just requiring 2FA when you first log in wouldn't help, of course.
j453 hours ago
2FAs also may require a level of KYC that Wikipedia isn't after and advocating for 2FA might indirectly advocate for a lot more things than just 2FA.
dns_snek1 hour ago
KYC? I'm talking about standard 2FA methods like Time-based OTP codes.
j453 hours ago
It's not, application logic exposed on the client side is always an attack vector for figuring out how it works and how attack vectors could be devised.

It's simply a calculated risk.

How much business and application logic you put in your Javascript is critical.

On your second unrelated comment about Wikipedia needing to use 2FA, there's probably a better way to do it and I hope mediawiki can do it.

dns_snek1 hour ago
I don't know what you mean by application logic being exposed client-side. To change the content on the website, nuke articles, and propagate the malicious JS code you need to hijack privileged users' credentials and use them to trigger server-side actions.

It doesn't matter how much functionality the JS was originally responsible for, it could've been as little as updating a clock, validating forms, or just some silly animation. Once that JS executes in your browser it has access to your cookies and local storage, which means it can trigger whichever server-side actions it wants.

My second comment is not unrelated. The root cause of this mess is the fact that JS can be edited by privileged users without an approval process. If every change to the JS code required the user to enter their 2FA code (TOTP, let's say) then there would be no way for the worm to spread whenever users visited a page.
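
To make that concrete, here is a from-scratch RFC 6238 sketch (Node.js; illustration only, not MediaWiki's actual 2FA code). If the server demands a fresh code like this on every edit to a *.js page, a worm that only holds the victim's session cookie can't save its payload:

  // Minimal TOTP (RFC 6238) - for illustration; real code should also accept
  // the adjacent time steps and rate-limit attempts.
  const crypto = require('crypto');

  function totpCode(secret, timeStep = 30, digits = 6) {
    const counter = Buffer.alloc(8);
    counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / timeStep)));
    const hmac = crypto.createHmac('sha1', secret).update(counter).digest();
    const offset = hmac[hmac.length - 1] & 0x0f;
    const code = ((hmac[offset] & 0x7f) << 24) |
                 (hmac[offset + 1] << 16) |
                 (hmac[offset + 2] << 8) |
                  hmac[offset + 3];
    return String(code % 10 ** digits).padStart(digits, '0');
  }

  // Hypothetical server-side gate in the edit handler for JS pages:
  function canSaveJsEdit(userTotpSecret, submittedCode) {
    return submittedCode === totpCode(userTotpSecret);
  }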

i_think_so5 hours ago
> Hitting MediaWiki:Common.js is the absolute nightmare scenario for MediaWiki deployments because that script gets executed by literally every single visitor

...except for us security wonks who have js turned off by default, don't enable it without good reason, disable it ASAP, and take a dim view of websites that require it.

Not too many years ago this behavior was the domain of Luddites and schizophrenics. Today it has become a useful tool in the toolbox of reasonable self-defense for anybody with UID 0.

Perhaps the WMF should re-evaluate just how specialsnowflake they think their UI is and see if, maybe just maybe, they can get by without js. Just a thought.

bbor5 hours ago
It warms my heart that there's basically a 0% chance that they ever approach this camp's viewpoint based on the Herculean effort it took to switch over to a slightly more modern frontend a few years back. I'm glad you don't think of yourself of a Luddite, but I think you're vastly overstating how open people are to a purely-static web.

Also, FWIW: Wikipedia is "specialsnowflake". If it isn't, that's merely because it was so specialsnowflake that there's now a healthy ecosystem of sites that copied their features! It's far, far more capable than a simple blog, especially when you get into editing it.

i_think_so3 hours ago
Ok, fair point. I presumed that this crowd would be far more familiar with the capabilities of HTML5 and dynamic pages sans js than most. (Surely more familiar than I, who only dabble in code by comparison.)

No, I'm not suggesting we all go back to purely-static web pages, imagemap gifs and server side navigation. But you're going to have a hard time convincing me that I really truly need to execute code of unknown provenance in my this-app-does-everything-for-me process just to display a few pages of text and 5 jpegs.

And for the record, I've called myself a Technologist for almost 30 years now. If I were a closet Luddite I'd be one of the greatest hypocrites of human history. :-)

TZubiri4 hours ago
There are thousands of copies of the whole of Wikipedia in SQL form though, IIRC it's just like 47 GB.
eblume3 hours ago
Correct. Not sure about a sql archive, but the kiwix ZIM archive of the top 1M English articles including (downsized but not minimized) images is 43GiB: https://download.kiwix.org/zim/wikipedia/

And the entire English wikipedia with no images is, interestingly, also 43GiB.

0xWTF5 hours ago
Looking forward to the postmortem...
krater232 hours ago
Just a thought.

Who wins the most from a Wikipedia outage and has questionable moral views? The same people who currently struggle to find paying customers for their services.

The large AI companies.

Kiboneu5 hours ago
GOD am I thankful to my old self for disabling js by default. And sticking with it.

edit: lol downvoted with no counterpoint, is it hitting a nerve?

Imustaskforhelp3 hours ago
> edit: lol downvoted with no counterpoint, is it hitting a nerve?

I have upvoted ya FWIW, and I don't understand either why people would downvote ya.

I mean, if websites work for you with JS disabled and you are fine with it, then fair enough; JS is somewhat of a threat vector.

Many of us are unable to live our lives without JS. I used to use LibreWolf, and complete and total privacy started feeling a little too uncomfortable.

Now I am on Zen Browser FWIW, which I do think has some improvements over stock Firefox in terms of privacy, but I can't say this for sure; I mainly use Zen because it looks really good and I just love it.

Kiboneu3 hours ago
> I mean, if websites work for you while disabling js and you are fine with it. Then I mean JS is an threat vector somewhat

It's also been torture, I definitely don't prescribe it. :P Like you say, it's a sanity / utility / security tradeoff. I just happen to be willing to trade off sanity for utility and security.

And yes, unfortunately I have to enable JS for some sites -- the default is to leave it disabled. And of course with cloudflare I have to whitelist it specifically for their domains (well, the non analytics domains). But thankfully wikipedia is light and spiffy without the javascript.

pluralmonad2 hours ago
What is uncomfortable about Librewolf? I thought it was basically FF without telemetry and UBO already baked in?
Imustaskforhelp2 hours ago
I appreciate LibreWolf, but when I used to use it, IIRC its fingerprinting protections were too strict for some websites and you definitely had to tone them down a bit by going into the settings. Canvases didn't work, and there were some other affected features too.

That being said, once again, LibreWolf is amazing software. I can see myself using it again, but I just find Zen easier to recommend (plus uBO, obviously).

Personally these are more aesthetic preferences than anything. I just really like how Zen looks and feels.

The answer is, sort of, just personal preference, that's all.

nixass5 hours ago
I can edit it
tantalor6 hours ago
"Закрываем проект" is Russian for "Closing the project"
j455 hours ago
It's reassuring to know Wikipedia has these kinds of security mechanisms in place.
lynx973 hours ago
Time to spend some of this excess money on a bit of security tightening? I hear we're talking about a 9 digit figure.
256_6 hours ago
Here before someone says that it's because MediaWiki is written in PHP.
Dwedit6 hours ago
PHP is the language where "return flase" causes it to return true.

https://danielc7.medium.com/remote-code-execution-gaining-do...

m4tthumphrey6 hours ago
Also the language that runs half of the web.

Also the language that has made me millions over my career with no degree.

Also the language that allows people to be up and running in seconds (with or without AI).

I could go on.

dspillett5 hours ago
> Also the language that has made me millions over my career with no degree.

Well done.

> Also the language that allows people to be up and running in seconds (with or without AI).

People getting up and running without any opportunity to be taught about security concerns (even those as simple as the risks of inadequate input verification), especially considering the infamous inconsistency in PHP's APIs which can lead to significant foot-guns, is both a blessing and a curse… Essentially a pre-cursor to some of the crap that is starting to be published now via vibe-coding with little understanding.

jjice5 hours ago
PHP is a fine language. It started my career. That said, it has a lot of baggage that can let you shoot yourself in the foot. Modern PHP is pretty awesome though.
radium3d5 hours ago
Pretty sure we've seen people coding in essentially every other programming language also shoot themselves in the foot.
Sohcahtoa824 hours ago
Every language has foot-guns of some sort. The difference is how easy it is to accidentally pull the trigger.

PHP makes it easy.

jjice3 hours ago
Yeah of course PHP isn't the only programming language you can write bugs in. I don't think you can make it impossible to shoot yourself in the foot, but PHP gives you more opportunities than some other languages, especially with older PHP standard library functions.

One thing I particularly hate is when functions require calling another function afterwards to get any errors that happened, like `json_decode`. C has that problem too.

Problems don't make it a _bad_ programming language. All languages have problems. PHP just has more than some other languages.

ramon1565 hours ago
The language is not what makes you, nor the product. You could've written the same thing in RoR; PHP was just first, and that's why it still exists.
stackghost5 hours ago
PHP performance is significantly better than Ruby on Rails, which I think plays a part in its continued popularity.
onion2k5 hours ago
Also the language that runs half of the web.

The bottom half.

;)

ChrisMarshallNY5 hours ago
I use it on the backends of my stuff.

Works great, but, like any tool, usage matters.

People who use tools badly, get bad results.

I've always found the "Fishtank Graph" to be relevant: https://w3techs.com/technologies/history_overview/programmin...

mannykannot4 hours ago
People who use tools badly inflict bad results on other people, quite often far more so than they do so on themselves.
ChrisMarshallNY2 hours ago
Yeah. It's funny how companies don't like to hire people that use tools correctly, but insist on creating tools that allow them to hire cheaper, less-qualified people.

PHP works fine, if you're a halfway decent programmer. Same with C++.

cwillu5 hours ago
Try not to take criticisms of tools personally. Phillips head screws are shit for a great many applications, while simultaneously being involved in billions of dollars of economic activity, and being a driver that everyone has available.
theamk5 hours ago
Yep, that's the sad truth - a language's popularity often has nothing to do with its security properties. People will happily keep churning out insecure junk as long as it makes them millions, botnet and data compromises be damned.
radium3d5 hours ago
PHP is insanely great, and very fast. The hate has no clout.
jasonjayr5 hours ago
Perl still runs the other half?
m4tthumphrey3 hours ago
I can't edit nor be bothered to reply to all of the negative responses so I'll put it here.

Pretty much all of you missed the larger point. PHP was what allowed me to not work in retail forever, buy a forever house, never have to worry about losing my job (this may change in the future with AI) or being at risk for redundancy, having chosen to only work for small, "normal" well run profitable businesses.

Unless you're building a hyper scale product, it does the job perfectly. PHP itself is not a security issue; using it poorly is, and any language can be used poorly. PHP is still perfectly suitable for web dev, especially in 2026.

420official5 hours ago
FWIW this was fixed in 2020
dspillett5 hours ago
I've not used PHP in anger in well over a decade, but if the general environment out there is anything like it was back then there are likely a lot of people, mostly on cheap shared hosting arrangements, running PHP versions older than that and for the most part knowing no better.

That isn't the fault of the language of course, but a valid reason for some of the “ick” reaction some get when it is mentioned.

Joel_Mckay3 hours ago
PHP had its issues like every language, but also a minimal memory footprint, XML/SOAP parser, and several SQL database cursor options.

Most modern web languages like nodejs are far worse due to dependency rot, and poor REST design pattern implementations. =3

ale425 hours ago
Except that in a contemporary PHP that doesn't work any more.

  PHP Warning:  Uncaught Error: Undefined constant "flase" in php shell code:1
This means game over, the script stops there.
MagicMoonlight4 hours ago
They have no incentive to improve the site, because they’re a for-profit entity.

Despite the constant screeching for donations, the entire site is owned by a company with shareholders. All the “donations” go to them. They already met their funding needs for the next century a long time ago, this is all profit.

charonn03 hours ago
That's a serious accusation. Can you elaborate? What is the name of the company? Why does the Wikimedia Foundation claim ownership? And if you're referring to the Wikimedia Foundation, then what do you mean by "shareholders"?
Uhhrrr5 hours ago
How do they know? Has this been published in a Reliable Source?
nhubbard5 hours ago
This is the official Wikimedia Foundation status page for the whole of Wikipedia, so it's a reliable primary source.
vova_hn25 hours ago
Actually, usage of primary sources is kinda complicated [0], generally Wikipedia prefers secondary and tertiary sources.

[0] https://en.wikipedia.org/wiki/Wikipedia:No_original_research...

jkaplowitz5 hours ago
Yeah, but the purpose of an encyclopedia like Wikipedia (a tertiary source) is to relatively neutrally summarize the consensus of those who spend the time and effort to analyze and interpret the primary sources (and thus produce secondary sources), or if necessary to cite other tertiary summaries of those.

In a discussion forum like HN, pointing to primary sources is the most reliable input to the other readers' research on/synthesis of their own secondary interpretation of what may be going on. Pointing to other secondary interpretations/analyses is also useful, but not without including the primary source so that others can - with apologies to the phrase currently misused by the US right wing - truly do their own research.

Uhhrrr4 hours ago
If you spend any time on Wikipedia, you'll find that secondary sources from an existing list are always preferred. The mandate from the link in GP (https://en.wikipedia.org/wiki/Wikipedia:No_original_research) extends, or at least is interpreted to mean to extend to, actively punishing editors who attempt to analyze or interpret primary sources.

My original post was a joke about this.

skrtskrt5 hours ago
Long past time to eliminate JavaScript from existence
krisoft57 minutes ago
You will have a long trek to do that. We have a javascript interpreter deployed at the second Sun-Earth Lagrange point.

https://www.theverge.com/2022/8/18/23206110/james-webb-space...

dgxyz40 minutes ago
I live happily in the knowledge that in 20000 years when that eventually drifts off into another system and is picked up by aliens that they will reverse engineer it and wonder why the fuck '5'-'4'=1
dgxyz5 hours ago
This.

Actually fuck the whole dynamic web. Just give us hypertext again and build native apps.

Edit: perhaps I shouldn't say this on an VC driven SaaS wankfest forum...

rainingmonkey4 hours ago
You may be interested in https://geminiprotocol.net/
dgxyz3 hours ago
Yes that's exactly what we should be using. Totally agree.
dlivingston4 hours ago
I mean sure, but that's never going to happen, so complaining about it is just shaking your fist at the sky. The only way it will change is if the economics of the web change. Maybe that is the economics of developer time (it being easier/fast/more resilient and thus cheaper to do native dev), or maybe it is that dynamic scripting leads to such extreme vulnerabilities that ease of deployment/development/consumer usage change the macroeconomics of web deployment enough to shift the scales to local.

But if there's one thing I've learned over the years as a technologist, it's this: the "best technology" is not often the "technology that wins".

Engineering is not done in a vacuum. Indeed, my personal definition of engineering is that it is "constraint-based applied science". Yes, some of those constraints are "VC buxx" wanting to see a return on investment, but even the OSS world has its own set of constraints - often overlapping. Time, labor, existing infrastructure, domain knowledge.

dgxyz3 hours ago
I think it will change.

The entire web is built on geopolitical stability and cooperation. That is no longer certain. We already have supply chains failing (RAM/storage) meaning that we will be hardware constrained for the foreseeable future. That puts the onus on efficiency and web apps are NOT efficient however we deliver them.

People are also now very concerned about data sovereignty whereas they previously were not. If it's not in your hands or on your computer then it is at risk.

The VC / SaaS / cloud industry is about to get hit very, very hard via this and regulation. At that point, it's back to native, as delivery is then no longer tied to a network control point.

I've been around long enough to see the centralisation and decentralisation cycles. We're heading the other way now

dlivingston2 hours ago
I think on a high level we're in agreement then. All of those points you mentioned are constraints.

> "VC / SaaS / cloud industry is about to get hit very very hard via ... regulation"

can you explain?

dgxyz52 minutes ago
Why? Well mostly due to the unpredictable behaviour of the country which seems to have the control points of most infra these days.

How? Well the numerous non-US sovereign technology initiatives are going to be incentivised through regulation with local compliance being the only option going forwards.

As a non-US person I am already speaking to people at other orgs in similar space as ours who are looking at options there.

streetfighter644 hours ago
Imagine what this vuln would have caused if Wikipedia were a native app. I for one prefer using stuff in the browser, where at least it's sandboxed. Also, there's nothing stopping you from disabling JS in your browser.
dgxyz3 hours ago
Wikipedia should be straight hypermedia. Simple.