It's interesting to see that Copilot has the worst record overall. I use Copilot completions constantly and rarely notice issues with it. I suspect incidents aren't added until after they resolve.
Do I misunderstand, or does your page count today's downtime as minor? I would not count the web UI being mostly unusable as minor. Does this mean GitHub understates how bad incidents are? Or has your page just not yet been updated to include it?
If you'd asked me a few years ago whether anything could be an existential threat to GitHub's dominance in the tech community, I'd have quickly said no.
If they don't get their ops house in order, this will go down as an all-time own goal in our industry.
I'm pretty sure they don't GAF about GH uptime as long as they can keep training models on it (0.5 /s), but Azure is revenue friction so might be a real problem.
I'm sympathetic to ops issues, and particularly sympathetic to ops issues that are caused by brain-dead corporate mandates, but you don't get to be an infrastructure company and have this uptime record.
It's extra galling that they advertise all the new buzzword laden AI pipeline features while the regular website and actions fail constantly. Academically I know that it's not the same people building those as fixing bugs and running infra, but the leadership is just clearly failing to properly steer the ship here.
That's probably partly why things have gotten increasingly flaky: until they finish, there'll be constant background cognitive load and extra surface area for bugs, because everything (especially the data) is half-migrated.
You'd think so, and we don't know about today's incident yet, but recent GitHub incidents have been attributed specifically to Azure, and Azure itself has had a lot of downtime recently, with outages lasting many hours.
I'm a firm believer that almost nothing except public services needs that kind of uptime...
We've introduced ridiculous amounts of complexity to our infra to achieve this, and we've contributed to the increasing costs of both services and development itself (the barrier to entry for current juniors is insane compared to what I had to deal with in my early 20s).
All kinds of companies lose millions of dollars of revenue per day, if not per hour, when their sites are not stable: Apple, Amazon, Google, Shopify, Uber, etc.
Those companies have decided the extra complexity is worth the reliability.
Even if you're operating a tech company that doesn't need to have that kind of uptime, your developers probably need those services to be productive, and you don't want them just sitting there either.
Many teams work exclusively in GitHub (ticketing, boards, workflows, dev builds). People also have entire production build systems on GitHub. There's a lot more than git repo hosting.
Any module that is properly tagged and contains an OSS license gets stored in Google's module cache indefinitely. As long as it was go-get-ed once before, you can pull it again without going to GitHub (or any other VCS host).
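If you ever need to do it by hand during an outage, here's a rough sketch (the module path and version are just examples, and proxy.golang.org is already in the default GOPROXY chain, so this mostly just makes it explicit):

```sh
# Resolve a dependency through the public Go module proxy instead of GitHub.
# (Run inside a Go module.)
GOPROXY=https://proxy.golang.org go mod download github.com/caddyserver/caddy/v2@v2.7.6

# The proxy protocol is plain HTTP, so you can also fetch artifacts directly:
curl -s  https://proxy.golang.org/github.com/caddyserver/caddy/v2/@v/list        # known versions
curl -sO https://proxy.golang.org/github.com/caddyserver/caddy/v2/@v/v2.7.6.zip  # module source zip
```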
Are you kidding? I need my code to pass CI and get reviewed so I can move on; otherwise the PRs just keep piling up. You might as well say the lights could go out, you can do paperwork.
Lots of teams embraced actions to run their CI/CD, and GitHub reviews as part of their merge process. And copilot. Basically their SOC2 (or whatever) says they have to use GitHub.
Does SOC2 itself require that, or just yours? I'm not too familiar with SOC2, but I know ISO 27001 quite well, and there are no PR-specific "requirements" to speak of. But it is something that could be included in your secure development policy.
And it's pretty common to write into the policy, because it's pretty much a gimme, and it lets you avoid writing a whole bunch of other equivalent quality measures in the policy.
Every product vendor, especially those that are even within a shouting distance from security, has a wet dream: to have their product explicitly named in corporate policies.
I think this is being downvoted unfairly. I mean, sure, as a company accepting payment for services, being down for a few hours every few months is notably bad by modern standards.
But the inward-looking point is correct: git itself is a distributed technology, and development using it is distributed and almost always latency-tolerant. To the extent that github's customers have processes that are dependent on services like bug tracking and reporting and CI to keep their teams productive, that's a bug with the customer's processes. It doesn't have to be that way and we as a community can recognize that even if the service provider kinda sucks.
Yeah, I'm literally looking at GitLab's "Migrate from GitHub" page on their docs site right now. If there's a way to import issues and projects I could be sold.
Maybe it'd be reasonable to script it using the glab and gh CLIs? I've never tried anything like that, but I regularly use the glab CLI and it's pretty comprehensive.
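Something like this might be a starting point (untested sketch; OWNER/REPO and GROUP/PROJECT are placeholders, and it assumes both CLIs are already authenticated):

```sh
# Copy open GitHub issues into a GitLab project, one at a time.
gh issue list --repo OWNER/REPO --state open --limit 200 --json title,body \
  | jq -c '.[]' \
  | while read -r issue; do
      glab issue create --repo GROUP/PROJECT \
        --title "$(jq -r '.title' <<<"$issue")" \
        --description "$(jq -r '.body' <<<"$issue")"
    done
```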
I viscerally dislike GitHub so much at this point. I don't know how they come back from this. Major opportunity for a competitor here to come along with AI-native features like context versioning.
Of course they're down while I'm trying to address a "High severity" security bug in Caddy but all I'm getting is a unicorn when loading the report.
(Actually there's 3 I'm currently working, but 2 are patched already, still closing the feedback loop though.)
I have a 2-hour window right now that is toddler free. I'm worried that the outage will delay the feedback loop with the reporter(s) into tomorrow and ultimately delay the patches.
I can't complain though -- GitHub sustains most of my livelihood so I can provide for my family through its Sponsors program, and I'm not a paying customer. (And yet, paying would not prevent the outage.) Overall I'm very grateful for GitHub.
Have you considered moving, or at least having an alternative? Asking as someone using Caddy for personal hosting who likes to have their website secure. :)
We can of course host our code elsewhere, the problem is the community is kind of locked-in. It would be very "expensive" to move, and would have to be very worthwhile. So far the math doesn't support that kind of change.
Usually an outage is not a big deal, I can still work locally. Today I just happen to be in a very GH-centric workflow with the security reports and such.
I'm curious how other maintainers maintain productivity during GH outages.
As an alternative, I was thinking mainly of a secondary repo and CI in case GitHub stops being reliable: not only given the current instability, but as a provider overall. I'm from the EU and recently catch myself evaluating every US company I interact with, and I'm starting to realize that mine might not be the only risk vector to consider. Wondering how other people think about it.
> have you considered moving or having at least an alternative
Not who you're responding to, but my 2 cents: for a popular open-source project reliant on community contributions there is really no alternative. It's similar to social media - we all know it's trash and noxious, but if you're any kind of public figure you have to be there.
14 incidents in February! It's February 9th! Glad to see the latest great savior phase of the AI industrial complex [1] is going just as well as all the others!
I know you are joking, but I'm sure there is at least one director or VP inside GitHub pushing a new salvation project that must use AI to solve all the problems, when actually the most likely reason is that engineers are drowning in tech debt.
Upper management at Microsoft has been bragging about their high percentage of AI-generated code lately, and in the meantime we've had several disastrous Windows 11 updates with the potential to brick your machine and a slew of outages at GitHub. Maybe it's something else, but it's clear part of their current technical approach is utterly broken.
Honestly, AI management would probably be better. "You're a competent manager, you're not allowed to break or circumvent workers' rights laws, you must comply with our CSR and HR policies, provide realistic estimates, and deliver stable and reliable products to our customers." Then just watch half the tech sector break down due to a lack of resources, or watch as profit is simply cut in half.
All the cool kids move fast and break things. Why not the same for core infrastructure providers? Let's replace our engineers with markdown files named after them.
I'm happy that they're being transparent about it. There's no good way to take downtime, but at least they don't try to cover it up. We can adjust and they'll make it better. I'm sure a retro is on its way; it's been quite the bumpy month.
Copilot is shown as having policy issues in the latest reports. Oh my, the irony. Satya is like "look ma, our stock is dropping..." Gee, I wonder why!
GitHub has had customer visible incidents large enough to warrant status page updates almost every day this year (https://www.githubstatus.com/history).
This should not be normal for any service, even at GitHub's size. There's a joke that your workday usually stops around 4pm, because that's when GitHub Actions goes down every day.
I wish someone inside the house cared to comment on why the services barely stay up and what actions they're planning to take to fix this issue, which has been going on for years but has definitely accelerated in the past year or so.
It's 100% because the number of operations happening on GitHub has likely 100x'd since the introduction of coding agents. They built GitHub for one kind of scale, and the problem is that they've suddenly found themselves with a new kind of scale.
That doesn't normally happen to platforms of this size.
We've migrated to Forgejo over the last couple of weeks. We position ourselves[0] as an alternative to the big cloud providers, so it seemed very silly that a critical piece of our own infrastructure could be taken out by a GitHub or Azure outage.
It has been a pretty smooth process. Although we have done a couple of pieces of custom development:
1) We've created a Firecracker-based runner, which will run CI jobs in Firecracker VMs. This brings the Forgejo Actions running experience much more closely into line with GitHub's environment (VM rather than container). We hope to contribute this back shortly, but also drop me a message if this is of interest.
2) We're working up a proposal[1] to add environments and variable groups to Forgejo Actions. This is something we expect to need for some upcoming compliance requirements.
I really like Forgejo as a project, and I've found the community to be very welcoming. I'm really hoping to see it grow and flourish :D
[0]: https://lithus.eu, adam@
[1]: https://codeberg.org/forgejo/discussions/issues/440
PS. We are also looking at offering this as a managed service to our clients.
They're in the process of moving from "legacy" infra to Azure, so there's a ton of churn happening behind the scenes. That's probably why things keep exploding.
I don't know jack about shit here, but genuinely: why migrate a live production system piecewise? Wouldn't it be far more sane to start building a shadow copy on Azure and let that blow up in isolation while real users keep using the real service on """legacy""" systems that still work?
Because it's significantly harder to isolate problems, and you'll end up in this loop:
* Deploy everything
* It explodes
* Rollback everything
* Spend two weeks finding the problem in one system, then fix it
* Deploy everything
* It explodes
* Rollback everything
* Spend two weeks finding a new problem that was created while you were fixing the last problem
* Repeat ad nauseam
Migrating iteratively gives you a foundation to build upon with each component
If you can make it work, migrating piecewise should mean less change/risk at each junction than one big jump of everything at once.
But you need to have pieces that are independent enough to run some here and some there, and ideally pieces that can fail without taking down the whole system.
That's a safer approach, but it will require teams to test in two infrastructures (old world and new) until the entire new environment is ready for prime time. They're hopefully moving fast and definitely breaking things.
1. Stateful systems (databases, message brokers) are hard to switch back-and-forth; you often want to migrate each one as few times as possible.
2. If something goes sideways -- especially performance-wise -- it can be hard to tell the reason if everything changed.
3. It takes a long time (months/years) to complete the migration. By doing it incrementally, you can reap the advantages of the new infra, and avoid maintaining two things.
---
All that said, GitHub is doing something wrong.
I think it's more likely the introduction of the ability to say "fix this for me" to your LLM + "lgtm" PR reviews. That or MS doing their usual thing to acquired products.
Definitely. The devil is in the details, though, since it's so damn hard to quantify the $$$ lost when you have a large, opinionated customer base that holds tremendous grudges. Doubly so when it's a subscription service with effectively unlimited lifetime for happy accounts.
Business by spreadsheet is super hard for this reason - if you try to charge the maximum you can before people get angry and leave then you're a tiny outage/issue/controversy/breach from tipping over the wrong side of that line.
Yeah, but who cares about the long term? In the long term we are all dead. A CEO only needs to look good for 5-10 years max, pump up the stock price, get applause everywhere, and be called the smartest guy in the world.
I think the last major outage wasn't even two weeks ago. We've got about another two weeks to finish our MVP and get it launched, and... this really isn't helpful. I'm getting pretty fed up with the unreliability.
Screw GitHub, seriously. This unreliability is not acceptable. If I’m in a position where I can influence what code forge we use in future I will do everything in my power to steer away from GitHub.
Every company I've worked at in the last 10 years used GH for internal codebase hosting, PRs, and sometimes CI. Discoverability doesn't really come into the picture for those users, and you can still fork things from GitHub even if you don't host your core code infra on it.
I can help you restore from backups if you will tell me where you backed it up.
You did back it up, right? Right before you ran me with `--allow-dangerously-skip-permissions` and gave me full access to your databases and S3 buckets?
More like the Tay.ai and Zoe.ai AIs still arguing amongst themselves, unable to keep the service online for Microsoft after it replaced their human counterparts.
It probably depends on your scale, but I'd suggest self-hosting a Forgejo instance, if it's within your domain expertise to run a service like that. It's not hard to operate, it will be blazing fast, it provides most of the same capabilities, and you'll be in complete control over the costs and reliability.
A few people have replied to you mentioning Codeberg, but that service is intended for open source projects, not private commercial work.
I've been using https://radicle.xyz/ + https://radicle-ci.liw.fi/ (in combination with my own ci adapter for nix flakes) for about half a year now for (almost) all my public and private repos and so far I really like it.
I would imagine that's what everyone is doing instead of sitting on their hands. Set up a different remote and have your team push/pull to/from it until GitHub comes back up. I mean, you could probably use ngrok and set up a remote on your laptop in a pinch. You shouldn't be totally blocked except for things like automated deployments or builds tied specifically to github.com.
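The mechanics are tiny if you haven't done it before; a sketch, with a hypothetical host and paths:

```sh
# Stand up a bare repo on any box the team can SSH to, and use it as a spare remote.
ssh dev.example.com 'git init --bare /srv/git/myproject.git'

git remote add backup ssh://dev.example.com/srv/git/myproject.git
git push backup --all && git push backup --tags   # publish every branch and tag

# Teammates add the same remote and keep working:
git fetch backup
git pull backup main
```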
For me it is their history of high-impact easily avoidable security bugs. I have no idea why "send a reset password link to an address from an unauthenticated source" was possible at all.
Nah at a small scale it's totally fine, and IME pretty pain-free after you've got it running. The biggest pain points are A) It's slow, B) between auth, storage, and CI runners, you have a lot of unavoidable configuration to do, and C) it has a lot of different features so the docs are MASSIVE.
Not really. About average in terms of infrastructure maintenance. I've been running our org's instance for 5 years or so, half that time with Premium and half with just the open source version, running on Kubernetes... ran it in AWS at first, then migrated to our own infrastructure.
At my last job I ran a GitLab instance on a tiny AWS server and ran workers on old desktop PCs in the corner of the office.
It's pretty nice if you don't mind it being some of the heaviest software you've ever seen.
I also tried gitea, but uninstalled it when I encountered nonsense restrictions with the rationale "that's how GitHub does it". It was okay, pretty lightweight, but locking out features purely because "that's what GitHub does" was just utterly unacceptable to me.
One thing that always bothered me about Gitea is they wouldn't even dogfood it for a long time. GitLab has been developing on GitLab since forever, basically.
Ad hominem isn't a very convincing argument, and as someone who also enjoys Forgejo, it doesn't make me feel good to see it given as the justification by another recommender.
From [1]: "Forgejo was created in October 2022 after a for profit company took over the Gitea project."
[1] https://forgejo.org/compare-to-gitea/
Forgejo became a hard fork in 2024, with both projects diverging. If you're using it for local hosting I don't personally see much of a difference between them, although that may change as the two projects evolve.
It looks like one of my employees got her whole account deleted or banned without warning during this outage. Hopefully this is resolved as service returns.
Pretty clear that companies like Microsoft are actually terrible at engineering; their core products were built 30 years ago. Any changes now are generally extremely incremental and quickly rolled back when there's an issue. Trying to innovate at GitHub shows just how bad they are.
It's not just MSFT, it's all of big tech. They basically run as a cartel, destroy competition through illegal means, engage in regulatory capture, and ensure their fiefdoms reign supreme.
All the more reason why they should be sliced and diced into oblivion.
Yeah, I have worked at a few FAANGs; honestly, it's stunning how entrenched and bad some of the products are. Internally, they are completely incapable of making any meaningful product changes; the whole thing will break.
It's a general curse of anything that becomes successful at a BigCorp.
The engineers who built the early versions were folks at the top of their field, and compensated accordingly. Those folks have long since moved on, and the whole thing is maintained by a mix of newcomers and whichever old hands didn't manage to promote out, while the PMs shuffle the UX to justify everyone's salary...
I'm not even sure I'd say they were "top"; I'd just say it's a different type of engineer, one that either doesn't get promoted to a big-impact role at a place like Microsoft, or leaves on their own.
I wonder what the value is of having a dedicated X (formerly Twitter) status account post-2023, when people without an account will see a mix of entries from 2018, 2024, and 2020 in no particular order upon opening it.
Is it just there so everyone can quickly share their post announcing they're back?
I made this joke 10 hours ago:
"I wonder if you opened https://github.com/claude in like 1000's of browsers / unique ips would it bring down github since it does seem to try until timeout"
It feels like GitHub's shift to these "AI writes code for you while you sleep!" features will appeal to a less technical crowd who lack awareness of the overall source code hosting and CI ecosystem. Combined with their operational incompetence of late (calling it how I see it), this will see their dominance fade as the default source code solution for folks maintaining production software projects.
Hopefully the hobbyists are willing to shell out for tokens as much as they expect.
It's a funny coincidence - I pushed a commit adding a link to an image in the README.md, opened the repo page, clicked on the said image, and got the unicorn page. The site did not load anymore after that.
At their core, antitrust cases are about monopolies and how companies use anti-competitive conduct to maintain their monopoly.
GitHub isn't the only source control software in the market. Unless they're doing something obvious and nefarious, it's doubtful the Justice Department will step in when you can simply choose one of many others like Bitbucket, Sourcetree, GitLab, SVN, CVS, Fossil, Darcs, or Bazaar.
There's just too much competition in the market right now for the govt to do anything.
Minimal changes have occurred to the concept of “antitrust” since its inception as a form of societal justice against corporations, at least per my understanding.
I doubt policymakers in the early 1900s could have predicted the impact of technology and globalization on the corporate landscape, especially vis-à-vis "vertical integration".
Personally, I think vertical integration is a pretty big blind spot in laws and policies that are meant to ensure consumers are not negatively impacted by anticompetitive corporate practices. Sure, "competition" may exist, but market activity often shifts meaningfully in a direction that is harmful to consumers once the biggest players swallow another piece of the supply chain (or product concept), and not just their competitors.
Not really. It's a network effect, like Facebook. Value scales quadratically with the number of users, because nobody wants to "have to check two apps".
We should buy out monopolies like the Chinese government does. If you corner the market, then you get a little payout and a "You beat capitalism! Play again?" prize. Other companies can still compete but the customers will get a nice state-funded high-quality option forever.
Not sure how having downtime is an anti-competition issue. I'm also not sure how you think you can take things away from people? Do you think someone just gave them GitHub and then take it away? Who are you expecting to take it away? Also, does your system have 100% uptime?
Companies used to be forced to sell parts of their business when antitrust was involved. The issue isn't the downtime, they should never have been allowed to own this in the first place.
There was just a recent case with Google to decide if they would have to sell Chrome. Of course the Judge ruled no. Nowadays you can have a monopoly in 20 adjacent industries and the courts will say it's fine.
You've been banging on about this for a while, I think this is my third time responding to one of your accounts. There is no antitrust issue, how are they messing with other competitors? You never back up your reasoning. How many accounts do you have active since I bet all the downvotes are from you?
I've had two accounts. I changed because I don't like the history (maybe one other person has the same opinion I did?). Anyways it's pretty obvious why this is an issue. Microsoft has a historical issue with being brutal to competition. There is no oversight as to what they do with the private data on GitHub. It's absolutely an antitrust issue. Do you need more reasoning?
Didn't you just privately tell me it was 4 accounts? Maybe that was someone else hating on Windows 95. But you need an active reason not what they did 20 years ago.
The more stable/secure a monopoly is in its position the less incentive it has to deliver high quality services.
If a company can build a monopoly (or oligopoly) in multiple markets, it can then use these monopolies to build stability for them all. For example, Google uses ads on the Google Search homepage to build a browser near-monopoly and uses Chrome to push people to use Google Search homepage. Both markets have to be attacked simultaneously by competitors to have a fighting chance.
The biggest thing tying my team to GitHub right now is that we use Graphite to manage stacked diffs, and as far as I can tell, Graphite doesn't support anything but GitHub. What other tools are people using for stacked-diff workflows (especially code review)?
Gerrit is the other option I'm aware of but it seems like it might require significant work to administer.
List of company-friendly managed-host alternatives? SSO, auditing, user management, billing controls, etc?
I would love to pay Codeberg for managed hosting + support. GitLab is an ugly overcomplicated behemoth... Gitea offers "enterprise" plans but do they have all the needed corporate features? Bitbucket is a joke, never going back to that.
On the plus side, it's git, so developers can at least get back to work without too much hassle as long as they don't need the CI/CD side of things immediately.
The saddest part to me is that their status update page and twitter are both out of date. I get a full 500 on github.com and yet all I see on their status page is an "incident with pull requests" and "copilot policy propagation delays."
So what's the moneyline on all these outages being the result of vibe-coded LLM-as-software-engineer/LLM-as-platform-engineer executive cost cutting mandates?
Anyone have alternatives to recommend? We will be switching after this. Already moved to self-hosted action runners and we are early-stage so switching cost is fairly low.
Issues, CI, and downloads for built binaries aren't part of vanilla Git. CI in particular can be hard if you make a multi-platform project and don't want to have to buy a new mac every few years.
The incident has now expanded to include webhooks, git operations, Actions, general page loads + API requests, issues, and pull requests. They're effectively down hard.
Hopefully it's down all day. We need more incidents like this to happen for people to get a glimpse of the future.
Github's two biggest selling points were its feature set (Pull Requests, Actions) and its reliability.
With the latter no longer a thing, and with so many other people building on Github's innovations, I'm starting to seriously consider alternatives. Not something I would have said in the past, but when Github's outages start to seriously affect my ability to do my own work, I can no longer justify continuing to use them.
Github needs to get its shit together. You can draw a pretty clear line between Microsoft deciding it was all in on AI and the decline in Github's service quality. So I would argue that for Github to get its shit back together, it needs to ditch the AI and focus on high quality engineering.
GitHub is the new Internet Explorer 6. A Microsoft product so dominant in its category that it's going to hold everyone back for years to come.
Just when open source development has to deal with the biggest shift in years and maintainers need a tool that will help them fight the AI slop and maintain the software quality, GitHub not only can't keep up with the new requirements, they struggle to keep their product running reliably.
Paying customers will start moving off to GitLab and other alternatives, but GitHub is so dominant in open source that maintainers won't move anywhere, they'll just keep burning out more than before.
I think this is an indicator of a broader trend where tech companies put less value on quality and stability and more value on shipping new features. It’s basically the enshittification of tech
GitHub has a long history of being extremely unstable. They were down all the time, much like recently, several years ago. They seemed to stabilize quite a bit around the MS acquisition era, and now seem to be returning to their old instability patterns.
They should have just scaled a proper Rails monolith instead of this React, Java whatever mixed mess.
But hey probably Microslop is vibecoding everything to Rust now!
Can we please demand that Github provide mirror APIs to competitors? We're just asking for an extinction-level event. "Oops, our AI deleted the world's open source."
Any public source code hosting service should be able to subscribe to public repo changes. It belongs to the authors, not to Microsoft.
The history of tickets and PRs would be a major loss, but a beauty of git is that if at least one dev has the repo checked out, you can easily rehost the code history.
It would be nice to have some sort of widespread standard for doing issue tracking, reviews, and CI in the repo, synced with the repo to all its clones (and driven entirely by version-managed text files and scripts), rather than in external, centralized web tools.
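Tools like git-bug already experiment with storing issues as git objects, but even a folder of plain Markdown files gets surprisingly far. A minimal sketch of such a convention (the layout and file names are entirely made up):

```sh
# Issues live in-repo as Markdown, so every clone carries the full tracker.
mkdir -p .issues
cat > .issues/0001-flaky-windows-ci.md <<'EOF'
status: open
labels: ci, windows
---
CI runs on the Windows runner have intermittently timed out since v1.4.
EOF
git add .issues
git commit -m "issues: open #0001 (flaky Windows CI)"
# Triage, discussion, and closure then travel as ordinary commits and merges.
```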
It's really pathetic for however many trillions MSFT is valued.
If we had a government worth anything, they ought to pass a law that other competitors be provided mirror APIs so that the entire world isn't shut off from source code for a day. We're just asking for a world wide disaster.
I get the feeling that most of these GitHub downtimes happen during US working hours, since I don't remember being impacted by them during work. Only noticed it now because I was looking up a repo in my free time.
Good thing we have LLM agents now. Before this kind of behavior was tolerable. Now it's pretty easy to switch over to using other providers. The threat of "but it will take them a lot of effort to switch to someone else" is getting less and less every day.
That pink "Unicorn!" joke is something that should be reconsidered. When your services are down, you're probably causing a lot of people a lot of stress; I don't think it's the time to be cute and funny about it.
One of Reddit's cutesy error pages (presumably for Internal Server Error or similar) is an illustration that says "You broke reddit". I know it's a joke, but I have wondered what effect that might have on a particularly anxiety-prone person who takes it literally and thinks they've done something that's taken the site down and inconvenienced millions of other people. Seems a bit dodgy for a mainstream site to assume all of its users have the dev knowledge to identify a joking accusation.
Even if it is their server name, I completely agree with your point. The image is not appropriate when your multi-billion revenue service is yet again failing to meet even a basic level of reliability, preventing people from doing their jobs and generally causing stress and bad feeling all round.
I am personally totally fine with it, but I see your point. GitHub is a bit too big to be breaking this often with a cutesy error message, even if it is a reference to their web server.
https://mrshu.github.io/github-statuses/
Something this week about "oops we need a quality czar": https://news.ycombinator.com/item?id=46903802
Does this mean you are only half-sarcastic/half-joking? Or did I interpret that wrong?
Pages and Packages completed in 2025.
Core platform and databases began in October 2025 and are in progress, with traffic split between the legacy Github data center and Azure.
Good news! You can't create new PRs right now anyway, so they won't pile.
I’m guessing they’re regretting it.
Our SOC2 doesn't specify GitHub by name, but it does require we maintain a record of each PR having been reviewed.
I guess in extremis we could email each other patch diffs, and CC the guy responsible for the audit process with the approval...
I have cleaned up more than enough of them.
Edit: Nevermind, looks like they migrated to github since the last time I contributed
Edit: oh, you probably meant an alternative to GitHub, perhaps...
[1] https://www.theverge.com/tech/865689/microsoft-claude-code-a...
GitHub is under Microsoft’s CoreAI division, so that’s a pretty sure bet.
https://www.geekwire.com/2025/github-will-join-microsofts-co...
The inertia is not permanent.
Computers can produce spreadsheets even better and they can warm the air around you even faster.
* writing endless reports and executive summaries
* pretending to know things that they don't
* not complaining if you present their ideas as yours
* sycophancy and fawning behavior towards superiors
Edit: Looks like they've got a status page up now for PRs, separate from the earlier notifications one: https://www.githubstatus.com/incidents/smf24rvl67v9
Edit: Now acknowledging issues across GitHub as a whole, not just PRs.
Investigating - We are investigating reports of impacted performance for some GitHub services. Feb 09, 2026 - 15:54 UTC
But I saw it appear just a few minutes ago, it wasn't there at 16:10 UTC.
Investigating - We are investigating reports of degraded performance for Pull Requests Feb 09, 2026 - 16:19 UTC
There are probably tons of baked in URLs or platform assumptions that are very easy to break during their core migration to Azure.
One solution I see is (e.g.) an internal forge (GitLab/Gitea/etc.) mirrored to GH for those secondary features.
Which is funny. If GH was better we'd just buy their better plan. But as it stands we buy from elsewhere and just use GH free plans.
Mirroring is probably the way forward.
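GitLab has built-in push mirroring for exactly this, but plain git is enough on its own; a sketch, with hypothetical hosts and repo paths:

```sh
# Keep a GitHub mirror of a repo whose source of truth is an internal forge.
git clone --mirror https://git.internal.example/team/app.git
cd app.git
git remote add github git@github.com:team/app.git

# Run periodically (cron or a CI job): pick up new refs and replay them to GitHub.
git remote update --prune
git push --mirror github
```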
Also very happy with SourceHut, though it is quite different (Forgejo looks like a clone of GitHub, really). The SourceHut CI is really cool, too.
Distributed source control is distributable.
I personally use Gitea, so I'd appreciate some additional information.
Just add a new git remote and push. Less so for issues and pulls, but at least your dev team/CI doesn't end up blocked.
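Another low-effort option is a second push URL on origin, so the everyday `git push` feeds both hosts (the URLs below are placeholders):

```sh
# Once any push URL is set explicitly, only the listed ones are used, so add both:
git remote set-url --add --push origin git@github.com:team/app.git
git remote set-url --add --push origin git@gitlab.com:team/app.git
git remote -v   # origin now lists two (push) entries; fetch still uses the original URL
```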
I am able to access api.github.com at 20.205.243.168 no problem
No problem with githubusercontent.com either
Hosting .git is not that complicated of a problem in isolation.
"A better way is to self host". [0]
[0] https://news.ycombinator.com/item?id=22867803
Github is down so often now, especially actions, I am not sure how so many companies are still relying on them.
Codeberg gets hit by a fair few attacks every year, but they're doing pretty well, given their resources.
I am _really_ enjoying Worktree so far.
Coincidence? I think not!
Simple: the US stopped caring about antitrust decades ago.
Today, when I was trying to see the contribution timeline of one project, it didn't render.
Radicle is the most exciting out of these, imo!
It's definitely some extra devops time, but claude code makes it easy to get over the config hurdles.
Self hosting would be a better alternative, as I said 5 years ago. [0]
[0] https://news.ycombinator.com/item?id=22867803
Maybe they need to get more humans involved, because GitHub has been down at least once a week for a while now.
But I don't understand: if they're that good, why are we getting an outage every other week? AWS had an outage that went unresolved for about 9+ hours!
The new-fangled copilot/agentic stuff I do read about on HN is meaningless to me if the core competency is lost here.
Beyond a meme at this point
Incident with Pull Requests https://www.githubstatus.com/incidents/smf24rvl67v9
Copilot Policy Propagation Delays https://www.githubstatus.com/incidents/t5qmhtg29933
Incident with Actions https://www.githubstatus.com/incidents/tkz0ptx49rl0
Degraded performance for Copilot Coding Agent https://www.githubstatus.com/incidents/qrlc0jjgw517
Degraded Performance in Webhooks API and UI, Pull Requests https://www.githubstatus.com/incidents/ffz2k716tlhx
EDIT: my bad, seems to be their server's name.
https://github.blog/news-insights/unicorn/
https://news.ycombinator.com/item?id=4957986
https://en.wikipedia.org/wiki/Unicorn_(web_server)