Cloudflare outage on February 20, 2026 (blog.cloudflare.com)
128 points by nomaxx117 3 hours ago | 23 comments
kgeist1 hour ago
It's something we debated in our team: if there's an API that returns data based on filters, what's the better behavior if no filters are provided - return everything or return nothing?

The consensus was that returning everything is rarely what's desired, for two reasons: first, if the system grows, allowing API users to return everything at once can be a problem both for our server (lots of data in RAM when fetching from the DB => OOM, and additional stress on the DB) and for the user (the same problem on their side). Second, it's easy to forget to specify filters, especially in cases like "let's delete something based on some filters."

So the standard practice now is to return nothing if no filters are provided, and we pay attention to it during code reviews. If the user does really want all the data, you can add pagination to your API. With pagination, it's very unlikely for the user to accidentally fetch everything because they must explicitly work with pagination tokens, etc.

Another option, if you don't want pagination, is to have a separate method named accordingly, like ListAllObjects, without any filters.
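A minimal sketch of the shape we converged on (hypothetical names, stdlib only, not any real API of ours):

    // Hypothetical "no filters => no data" handler with explicit pagination.
    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    type listResponse struct {
        Items     []string `json:"items"`
        NextToken string   `json:"next_token,omitempty"`
    }

    func listObjects(w http.ResponseWriter, r *http.Request) {
        q := r.URL.Query()
        hasFilter := q.Get("owner") != "" || q.Get("status") != ""
        pageToken := q.Get("page_token")

        // No filters and no explicit pagination token: return an empty page
        // instead of silently dumping the whole table.
        if !hasFilter && pageToken == "" {
            json.NewEncoder(w).Encode(listResponse{Items: []string{}})
            return
        }

        // ... fetch a bounded page matching the filters (or the page named by
        // pageToken) and return it with a NextToken the caller must echo back.
        json.NewEncoder(w).Encode(listResponse{Items: []string{"example"}, NextToken: "opaque-cursor"})
    }

    func main() {
        http.HandleFunc("/v1/objects", listObjects)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }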

alemanek48 minutes ago
Returning an empty result in that case may cause a more subtle failure. I would think returning an error would be a bit better, as it would clearly communicate that the caller called the API endpoint incorrectly. If it's HTTP, a 400 Bad Request status code would seem appropriate.
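Concretely, something like this (hypothetical handler, stdlib only):

    // Hypothetical variant: reject unfiltered list/delete requests outright.
    package main

    import (
        "log"
        "net/http"
    )

    func listObjects(w http.ResponseWriter, r *http.Request) {
        if len(r.URL.Query()) == 0 {
            http.Error(w, "at least one filter (or a page_token) is required", http.StatusBadRequest)
            return
        }
        // ... apply the filters and return a bounded page of results.
    }

    func main() {
        http.HandleFunc("/v1/objects", listObjects)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }
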
MobileVet1 hour ago
I like your thought process around the ‘empty’ case. While the opposite of a filter is no filter, to your point, that is probably not really the desire when it comes to data retrieval. We might have to revisit that ourselves.
CommonGuy3 hours ago
Insufficient mock data in the staging environment? Like no BYOIP prefixes at all? Since even one prefix should have shown that it would be deleted by that subtask...

From all the recent outages, it sounds like Cloudflare is barely tested at all. Maybe they have lots of unit tests etc, but they do not seem to test their whole system... I get that their whole setup is vast, but even testing that subtask manually would have surfaced the bug

zmj34 minutes ago
Testing the "whole system" for a mature enterprise product is quite difficult. The combinatorial explosion of account configurations and feature usage becomes intractable on two levels: engineers can't anticipate every scenario they need their tests to cover (because the product is too big understand the whole of), and even if comprehensive testing was possible - it would be impractical on some combination of time, flakiness, and cost.
dabinat3 hours ago
I think Cloudflare does not sufficiently test lesser-used options. I lurk in the R2 Discord and a lot of users seem to have problems with custom domains.
asciii3 hours ago
It was also merged 15 days prior to the production release... however, you're spot on about the empty test. That's a basic scenario: if it returns everything instead, it's an immediate "oh no".
martinald2 hours ago
Just crazy. Why does a staging environment matter? Surely they should be running some integration tests against e.g. an in-memory database for these kinds of tasks?
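Even a toy test would have flagged it — a rough sketch with a made-up in-memory store and a handler shaped like the one in the post (none of this is Cloudflare's code); it fails exactly where the bug is:

    // Hypothetical integration test: a prefix that is NOT pending deletion must
    // never come back from the "pending_delete" query the cleanup sub-task uses.
    package prefixes

    import (
        "encoding/json"
        "net/http"
        "net/http/httptest"
        "testing"
    )

    // Toy in-memory store and handler standing in for the real service.
    type store struct {
        active          []string // advertised prefixes
        pendingDeletion []string // prefixes queued for removal
    }

    func (s *store) handler(w http.ResponseWriter, r *http.Request) {
        // Same shape as the handler in the post: only a *non-empty* value selects
        // the pending-deletion set; a bare "?pending_delete" falls through to "all".
        if v := r.URL.Query().Get("pending_delete"); v != "" {
            json.NewEncoder(w).Encode(s.pendingDeletion)
            return
        }
        json.NewEncoder(w).Encode(append(s.pendingDeletion, s.active...))
    }

    func TestPendingDeleteNeverReturnsActivePrefixes(t *testing.T) {
        s := &store{active: []string{"203.0.113.0/24"}}
        srv := httptest.NewServer(http.HandlerFunc(s.handler))
        defer srv.Close()

        // Call it exactly the way the sub-task did: bare flag, no value.
        resp, err := http.Get(srv.URL + "/v1/prefixes?pending_delete")
        if err != nil {
            t.Fatal(err)
        }
        defer resp.Body.Close()

        var got []string
        if err := json.NewDecoder(resp.Body).Decode(&got); err != nil {
            t.Fatal(err)
        }
        if len(got) != 0 {
            t.Fatalf("active prefixes returned as pending deletion: %v", got)
        }
    }
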
suhputt43 minutes ago
my guess is the company is rotting from the inside and drowning in tech debt
otar2 hours ago
Reliability was/is CF's brand.

It's already alarming: too many outages in the past few months. CF needs to fix this, or it will become unacceptable and people will leave the platform.

I really hope they will figure things out.

argestes2 hours ago
I have many things dependent on Cloudflare. That makes me root for Cloudflare and I think I'm not the only one. Instead of finding better options we're getting stuck on an already failing HA solution. I wonder what caused this.
slothsarecool1 hour ago
There are no real alternatives, and the ones that did exist back in the day either went out of business or couldn't sustain a paygo model.

Not everybody needs Cloudflare, but those that need it and aren't major enterprises have no other option.

pocksuppet1 hour ago
Lots of people who think they need Cloudflare don't. What are you using it for?
slothsarecool1 hour ago
L7 DDoS protection and global routing + CDN. There isn't a single paygo provider that can handle the capacity CF can, especially not at this price range (mitigated attacks distributed across approximately 50-90k IPs, adding up to about 300-700k rps).

We tried Stackpath, Imperva (Incapsula back in the day), etc., but they were either too expensive or went out of business.

blibble5 minutes ago
> especially not at this price range

pay peanuts, get monkeys

Sanzig1 hour ago
Bunny.net? Doesn't have nearly the same feature set as Cloudflare, but the essentials are there and you can easily pay as you go with a credit card.
slothsarecool1 hour ago
Their WAF isn't there yet. The moment it can build the expressions you can build with CF (and gives you as much visibility into the traffic as CF does), it might be a solid option, assuming they have the compute/network capacity.
arcatech2 hours ago
Do you not feel concern about you and everybody else deciding to put ALL of their eggs into one basket like this?
ranger_danger21 minutes ago
I would bet money that most people who use CF now are already hosting their endpoints at a single provider. I don't think most people care until it actually becomes enough of a problem.
alansaber2 hours ago
Not sure why everyone is complaining, new MCP features are more important than uptime
blibble3 hours ago
is this blog post LLM generated?

the explanation makes no sense:

> Because the client is passing pending_delete with no value, the result of Query().Get(“pending_delete”) here will be an empty string (“”), so the API server interprets this as a request for all BYOIP prefixes instead of just those prefixes that were supposed to be removed. The system interpreted this as all returned prefixes being queued for deletion.

client:

     resp, err := d.doRequest(ctx, http.MethodGet, `/v1/prefixes?pending_delete`, nil)
server:

    if v := req.URL.Query().Get("pending_delete"); v != "" {
        // ignore other behavior and fetch pending objects from the ip_prefixes_deleted table
        prefixes, err := c.RO().IPPrefixes().FetchPrefixesPendingDeletion(ctx)
        if err != nil {
            api.RenderError(ctx, w, ErrInternalError)
            return
        }

        api.Render(ctx, w, http.StatusOK, renderIPPrefixAPIResponse(prefixes, nil))
        return
    }
even if the client had passed a value it would have still done exactly the same thing, as the value of "v" (or anything from the request) is not used in that block
subscribed1 hour ago
That's weird. They only removed some 6 of our prefixes out of perhaps 40 we have with them, so something seems off in this explanation.
bretthoerner3 hours ago
> even if the client had passed a value it would have still done exactly the same thing, as the value of "v" (or anything from the request) is not used in that block

If they passed in any value, they would have entered the block and returned early with the results of FetchPrefixesPendingDeletion.

From the post:

> this was implemented as part of a regularly running sub-task that checks for BYOIP prefixes that should be removed, and then removes them.

They expected to drop into the block of code above, but since they didn't, the request fell through to the default path, which returned all prefixes.
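To spell out the mechanics (a standalone sketch, not their code): a bare `?pending_delete` parses as an empty string, so the `v != ""` guard is skipped and the request falls through to whatever the default path returns.

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // Bare flag, the way the sub-task called it: the value is the empty string.
        u, _ := url.Parse("/v1/prefixes?pending_delete")
        fmt.Printf("%q\n", u.Query().Get("pending_delete")) // ""  -> guard skipped, default path

        // Any value at all would have taken the intended branch.
        u2, _ := url.Parse("/v1/prefixes?pending_delete=true")
        fmt.Printf("%q\n", u2.Query().Get("pending_delete")) // "true" -> pending-deletion branch
    }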

blibble2 hours ago
okay so the code which returned everything isn't there

actual explanation: the API server by default returns everything. the client attempted to make a request to return "pending_deletes", but as the request was malformed, the API instead went down the default path, which returned everything. then the client deleted everything.

makes sense now

but that explanation is even worse

because that means the code path was never tested?

jbxntuehineoh2 hours ago
or they tested it, but not with a dataset that contained prefixes not pending deletion
bstsb3 hours ago
doesn't look AI-generated. even if they have made a mistake, it's probably just from the rush of getting a postmortem out prior to root cause analysis
himata41133 hours ago
yep, and no mention that re-advertised prefixes kept being withdrawn again throughout the entire impact window, even after they shut it down.
atty3 hours ago
I do not work in the space at all, but it seems like Cloudflare has been having more network disruptions lately than they used to. To anyone who deals with this sort of thing, is that just recency bias?
Icathian3 hours ago
It is not. They went about 5 years without one of these, and had a handful over the last 6 months. They're really going to need to figure out what's going wrong and clean up shop.
NinjaTrance3 hours ago
Engineers have been vibe coding a lot recently...
jsheard3 hours ago
The featured blog post where one of their senior engineering PMs presented an allegedly "production grade" Matrix implementation, in which authentication was stubbed out as a TODO, says it all really. I'm glad a quarter of the internet is in such responsible hands.
gtowey2 hours ago
It's spreading and only going to get worse.

Management thinks AI tools should make everyone 10x as productive, so they're all trying to run lean teams and load up the remaining engineers with all the work. This will end about as well as the great offshoring of the early 2000s.

blibble2 hours ago
there was also a post here where an engineer was parading around a vibe-coded oauth library he'd made as a demonstration of how great LLMs were

at which point the CVEs started to fly in

ranger_danger19 minutes ago
Matrix doesn't actually define how one should do authentication though... every homeserver software is free to implement it however they want.
dana3213 hours ago
That's a classic Claude move; even the new Sonnet 4.6 still does this.
bonesss2 hours ago
It’s almost as classic as just short circuiting tests in lightly obfuscated ways.

I could be quite the kernel developer if making the test green were the only criterion.

dakiol2 hours ago
No joke. In my company we "sabotaged" the AI initiative led by the CTO. We used LLMs to deliver features as requested by the CTO, but we intentionally introduced a couple of bugs here and there. As a result, the quarter ended with more time allocated to fixing bugs and tons of customer complaints. The CTO is now undoing his initiative. We've all bought ourselves some more time to keep our jobs.
samrus2 hours ago
That's actively malicious. I understand not going out of your way to catch the LLMs' bugs so as to show the folly of the initiative, but actively sabotaging it is legitimately dangerous behavior. It's acting in bad faith. And I say this as someone who would mostly oppose such an initiative myself.

I would go so far as to say that you shouldn't be employed in the industry. Malicious actors like you will contribute to an erosion of trust that'll make everything worse.

sp00chy2 hours ago
Might be but sometimes you don’t have another choice when employers are enforcing AIs which have no „feeling“ for context of all business processes involved created by human workers in the years before. Those who spent a lot of love and energy for them mostly. And who are now forced to work against an inferior but overpowered workforce.

Don’t stop sabotaging AI efforts.

tovej43 minutes ago
Forcing developers to use unsafe LLM tools is also malicious. This is completely ethical to me. Not commenting on legality. But ethically, this is correct.
hypeatei2 hours ago
That's extremely unethical. You're being paid to do something and you deliberately broke it which not only cost your employer additional time and money, but it also cost your customers time and money. If I were you, I'd probably just quit and find another profession.
renegade-otter2 hours ago
I see someone is not familiar with the joys of the current job market.
logicchains2 hours ago
That's not "sabotaged", that's sabotaged, if you intentionally introduced the bugs. Be very careful admitting something like that publicly unless you're absolutely completely sure nobody could map your HN username to your real identity.
Ylpertnodi2 hours ago
Typo: "shop", should have been with an 'el'.

(: phonetically, because 'l's are hard to read.

dazc3 hours ago
Launching a new service every 5 minutes is obviously stretching their resources.
lysace3 hours ago
It has been roughly speaking five and a half years since the IPO. The original CTO (John Graham-Cumming) left about a year ago.
jacquesm3 hours ago
They coasted on momentum for half a year. I don't even think it says anything negative about the current CTO; it's more about what an exception JGC is relative to what is normal. A CTO leaving would never show up in the stats the next day, the position is strategic after all, but you'd expect to see the effect after a while. Six months is longer than I would have expected, but short enough that cause and effect are undeniable.

Even so, it is a strong reminder not to rely on any one vendor for critical stuff, in case that wasn't clear enough yet.

dazc3 hours ago
I wondered what happened to him?
jgrahamc1 hour ago
I am reading HN.
SoKamil57 minutes ago
What is your opinion on the recent Cloudflare outages?
brcmthrowaway2 hours ago
He's on a yacht somewhere
tedd4u2 hours ago
For real
slophater2 hours ago
been at cf for 7 yrs but thinking of gtfo soon. the ceo is a manchild, the new cto is an idiot, the rest of leadership was replaced by yes-men, and the push for AI-first is turning into a disaster. c-levels pretend they care about reliability but pressure teams to constantly ship, the cto vibe codes terraform changes without warning anyone, and it's overall a bigger and bigger mess

even the blog, that used to be a respected source of technical content, has morphed into a garbage fire of slop and vaporware announcements since jgc left.

sebmellen14 minutes ago
Do you feel that Matthew Prince is still technically active/informed? I've interacted with him in the past and he seemed relatively technically grounded, but that doesn't seem as true these days.
goalieca2 hours ago
I’ve had a lot of problems lately. Basic things are failing and it’s like product isn’t involved in the dash at all. What’s worse? The support... the chat is the buggiest thing I’ve ever seen.
slophater33 minutes ago
don't worry, if it gets much worse the ceo will just throw all of support under the bus again. it will surely get better.
lysace45 minutes ago
> the ceo is a manchild

Checks out with what we have seen from the outside.

__turbobrew__1 hour ago
You know what they say, shit rolls downhill. I don't personally know the CEO, but the feeling I have got from their public fits on social media doesn't instill confidence.

If I was a CF customer I would be migrating off now.

a24446ff872 hours ago
GSD! GSD!! ship! ship! ship!

**everything breaks**

...

**everything breaks again**

oh fuck! Code Orange! I repeat, Code Orange! we need to rebuild trust(R)(TM)! we've let our customers down!

...

**everything breaks again**

Code Orangier! I repeat, Code Orangier!

slophater38 minutes ago
exactly. recently "if the cto is shipping more than you, you're doing something wrong"

cto can't even articulate a sentence without passing it through an LLM, and instead of doing his job he's posting the stupidest shit to his personal bootlicking chat channel. I cringe every time at the brown-nosers that inhabit that hovel.

no words for what the product org is becoming either. they should take their own advice a bit further and just replace all the leadership with an LLM, it would be cheaper and it's the same shit in practice

slophater2 hours ago
amazing how my comment was flagged in 30 seconds... keep bootlicking
Betelbuddy3 hours ago
Cloudflare outages are as predictable as the sun coming up tomorrow. It's their engineering culture.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

candiddevmike2 hours ago
Wait till you see the drama around their horrible terraform provider update/rewrite:

https://github.com/cloudflare/terraform-provider-cloudflare/...

NinjaTrance3 hours ago
The irony is that the outage was caused by a change from the "Code Orange: Fail Small initiative".

They definitely failed big this time.

anurag2 hours ago
The one redeeming feature of this failure is staged rollouts. As someone advertising routes through CF, we were quite happy to be spared from the initial 25%.
himata41133 hours ago
This blog post is inaccurate: the prefixes were being revoked over and over. To keep your prefixes advertised you had to have a script that would re-add them, or else they would be withdrawn again. The way they worded it is really dishonest.
boarush3 hours ago
While neither I nor the company I work for was directly impacted by this outage, I wonder how long Cloudflare can take these hits and keep apologizing for them. I truly appreciate them being transparent about it, but businesses care more about SLAs and uptime than about the incident report.
llama0523 hours ago
I’ll take clarity and actual RCAs over Microsoft’s approach of not notifying customers and keeping their status page green until enough people notice.

One thing I do appreciate about Cloudflare is their actual use of their status page. That’s not to say these outages are okay. They aren’t. However, I’m pretty confident that a lot of providers would have a big paper trail of outages if they were as honest as Cloudflare, or more so. At least from what I’ve noticed, especially this year.

boarush3 hours ago
Azure straight up refuses to show me if there's even an incident, even when I can literally not access shit.

But the last few months have been quite rough for Cloudflare, including a few outages on their Workers platform that didn't quite make the headlines. Can't wait for Code Orange to get to production.

jacquesm3 hours ago
Bluntly: they expended that credit a while ago. Those that can will move on. Those that can't have a real problem.

As for your last sentence:

Businesses really do care about the incident reports because they give good insight into whether they can trust the company going forward. Full transparency and a clear path to non-repetition due to process or software changes are called for. You be the judge of whether or not you think that standard has been met.

boarush2 hours ago
I might be looking at it differently, but aren't decisions about a service provider made by management? Incident reports never reach that level, in my experience.
samrus2 hours ago
In my experience, the gist of it does reach management when it's an existing vendor, especially if management is tech literate.

Because management wants to know why the graphs all went to zero, and the engineers have nothing else to do but relay the incident report.

This builds management's perception of the vendor, and if the perception is that the vendor doesn't tell them shit or doesn't even seem to know there's an outage, then management can decide to shift vendors.

dilyevsky2 hours ago
> Because the client is passing pending_delete with no value, the result of Query().Get(“pending_delete”) here will be an empty string (“”), so the API server interprets this as a request for all BYOIP prefixes instead of just those prefixes that were supposed to be removed.

Lmao, iirc a long time ago Google's internal system had the exact same bug (treating empty as "all" in the delete call) and it took down all their edges. Surprisingly there was little impact, as traffic just routed through the next set of proxies.

jaboostin2 hours ago
Hindsight is 20/20 but why not dry run this change in production and monitor the logs/metrics before enabling it? Seems prudent for any new “delete something in prod” change.
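Even a simple dry-run flag plus a sanity cap would do it — a rough sketch with made-up names, not their code:

    // Hypothetical shape for the cleanup sub-task: log first, delete later,
    // and refuse to delete an implausibly large result set.
    package prefixcleanup

    import (
        "context"
        "fmt"
        "log"
    )

    const maxDeletesPerRun = 100 // made-up sanity cap

    // prefixAPI is a stand-in for whatever client the sub-task actually uses.
    type prefixAPI interface {
        FetchPrefixesPendingDeletion(ctx context.Context) ([]string, error)
        DeletePrefixes(ctx context.Context, prefixes []string) error
    }

    func runCleanup(ctx context.Context, api prefixAPI, dryRun bool) error {
        prefixes, err := api.FetchPrefixesPendingDeletion(ctx)
        if err != nil {
            return err
        }
        log.Printf("cleanup: %d prefixes pending deletion", len(prefixes))
        if dryRun {
            // Observe the count in prod for a while before flipping the flag.
            return nil
        }
        if len(prefixes) > maxDeletesPerRun {
            return fmt.Errorf("refusing to delete %d prefixes (cap %d)", len(prefixes), maxDeletesPerRun)
        }
        return api.DeletePrefixes(ctx, prefixes)
    }
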
vimda1 hour ago
One has to wonder when the board realises Dane was a bad replacement for JGC. These outages are getting ridiculous
ssiddharth3 hours ago
The eternal tech outage aphorism: It's always DNS, except for when it's BGP.
subscribed58 minutes ago
You could argue BGP is like DNS for IPs :)
user2057381 hour ago
They should have rewritten this code in Rust using these brilliant language models. /jk
djfobbz1 hour ago
I'm honestly amazed that a company CF's size doesn't have a neat little cluster of Mac Minis running OpenClaw and quietly taking care of this for them.
tokyobreakfast2 hours ago
Is this trend of oversharing code snippets and TMI postmortems done purposely to distract their customers from raging over the outage and the next impending fuckup?
samrus2 hours ago
Just seems like transparency. I agree that we should also judge them based on the frequency of these incidents and whether they provide a path to non-repetition, but I wouldn't criticize them for the transparency per se.
alansaber2 hours ago
Well I still appreciate a good postmortem even if I have no doubt it'll happen again imminently
bdangubic2 hours ago
and if they didn’t we’d be posting about the lack of transparency. damned if you do, damned if you don’t
VirusNewbie2 hours ago
If you track large SaaS and cloud uptime, it seems to correlate pretty highly with compensation at big companies. Is Cloudflare getting top talent?
bombcar2 hours ago
Based on IPO date and lockups, I suspect top talent is moving on.
wa0082 hours ago
A transparent report like this can earn my trust
henning2 hours ago
Sure vibe-coded slop that has not been properly peer reviewed or tested prior to deployment is leading to major outages, but the point is they are producing lots of code. More code is good, that means you are a good programmer. Reading code would just slow things down.
sp00chy2 hours ago
that’s my feeling also. We will see this more and more in the future.
NooneAtAll32 hours ago
again?
dryarzeg3 hours ago
DaaS - Downtime as a Service©

Just joking, no offence :)

logicchains1 hour ago
DaaS is good ja