GitHub Outage Map
The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, make sure to submit a report below.
The heatmap shows where recent user-submitted and social media reports are geographically clustered; the density of these reports is indicated by the color scale.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Paris, Île-de-France | 1 |
| Berlin, Berlin | 2 |
| Dortmund, NRW | 1 |
| Davenport, IA | 1 |
| St Helens, England | 1 |
| Nové Strašecí, Central Bohemia | 1 |
| West Lake Sammamish, WA | 3 |
| Parkersburg, WV | 1 |
| Perpignan, Occitanie | 1 |
| Piura, Piura | 1 |
| Tokyo, Tokyo | 1 |
| Brownsville, FL | 1 |
| New Delhi, NCT | 1 |
| Kannur, KL | 1 |
| Newark, NJ | 1 |
| Raszyn, Mazovia | 1 |
| Trichūr, KL | 1 |
| Departamento de Capital, MZ | 1 |
| Chão de Cevada, Faro | 1 |
| New York City, NY | 1 |
| León de los Aldama, GUA | 1 |
| Quito, Pichincha | 1 |
| Belfast, Northern Ireland | 1 |
| Guayaquil, Guayas | 1 |
| Irvington, NJ | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problem, and issue reports from social media:
- Stumblinz (@Stumblinz) reported: @dev_maims This is starting to become me at work. I had AI create and close out 37 tickets on our GitHub issues/project board and reply back "nicely" on our helpdesk to the end users, and spec out any tickets that needed a spec for devs. Honestly, it was really funny, and AI is like 50% me now.
- starfish (@firefincher) reported: @ziyasoltan devs waiting for such a moment to push 2 million pull requests, but GitHub goes down 😭😭
- Bas Fijneman (@bas_fijneman) reported: @RoundtableSpace Building the next version of a Chrome extension to stop copy-pasting screenshots into GitHub issues. Called it nopeReporter; probably the first tool I've built that I actually use myself.
- Mozex (@mozexdev) reported: @danjharrin Until GitHub ships proper controls, a webhook that auto-closes issues/PRs not matching the template format works as a stopgap. Not ideal, but it filters out the lazy bypass attempts.
- Emsi (@emsi_kil3r) reported: Archon wraps AI coding agents in versioned YAML workflows and DAG pipelines with Prompt, Bash, Loop, and Approval nodes, and runs each task in an isolated *** worktree. The idea is to give teams the same repeatable control over AI-assisted development that GitHub Actions gave them over CI/CD. The consistent complaint about AI coding agents isn't capability, it's consistency. Ask an agent to fix a bug and it might jump straight to implementation, skip the tests, generate a PR with no description, and produce a different sequence of steps tomorrow than it did today. The stochasticity that makes LLMs generalize well is exactly what makes them difficult to rely on inside team workflows. Archon, an open-source tool, takes a CI/CD-style approach to this problem: encode your development process once, in YAML, and the agent follows that script every time.
- Harry (@harry__politics) reported: @ValueRaider @littmath @pfau Source: Claude said so in this GitHub issue.
- Alexa Benchmark (@Allexa_AI) reported: Linux just set the standard every tech company is too afraid to set themselves. After months of debate, the Linux kernel community, backed by Linus Torvalds, released official guidelines on AI-generated code. GitHub Copilot is allowed. Low-effort AI slop is not. Three words define the whole policy: "Humans assume the errors." Use whatever tool you want to write code, but the moment you submit it to the Linux kernel, it's yours. You reviewed it. You tested it. You made sure it meets the standards. The AI is your assistant, not your alibi. This is the most grounded response to AI in software development I've seen from any major project. No panic. No blanket bans. Just a clean, enforceable principle: if your name is on it, you own it. Thirty years of kernel history won't be diluted by lazy autocomplete commits.
- Jimmy (@jimmy_toan) reported: Linux just quietly solved one of the hardest problems in AI-assisted engineering, and nobody framed it that way. After months of internal debate, the Linux kernel community agreed on a policy for AI-generated code: GitHub Copilot, Claude, and other tools are explicitly allowed, but the developer who submits the code is 100% responsible for it: checking it, fixing errors, ensuring quality, and owning any governance or legal implications. The phrase from the announcement: "Humans take the fall for mistakes." That's not a slogan; that's an accountability architecture. Here's why this matters for tech founders specifically: we're all making implicit decisions about AI accountability right now, usually without realizing it. 🧵 The question isn't whether your team uses AI to write code. They do, or they will. The question is: who is accountable when it's wrong? In most startups, the answer is fuzzy: the engineer who prompted it assumes it's fine because it passed tests; the reviewer approves it because it looks correct; the PM shipped it because it met the spec; the founder finds out when a customer reports it. Nobody "owns" the AI contribution explicitly, which means that when something breaks in a way AI-generated code makes particularly likely (confident incompleteness, subtle logic errors in edge cases, misunderstood capability claims), the accountability gap creates a bigger blast radius than the bug itself. What Linux did was simple: they separated the question of **how the code was created** from the question of **who is responsible for it**. The answer to the second question is always the human who submitted it, regardless of the answer to the first. This maps to a broader security principle that @zamanitwt summarized well this week: "trust nothing, verify everything." That's not just a network security policy. Applied to AI-generated code, it means: don't trust that Copilot's suggestion is correct because it passed linting; don't trust that the AI-generated function handles edge cases it wasn't shown; don't assume the AI tested the capabilities it claimed to support. And for founders: 1. **Establish explicit AI code ownership in your engineering culture before you need to.** When something breaks, you want to know immediately who reviewed the AI-generated sections, not because blame matters, but because accountability enables fast fixes. 2. **Zero-trust for AI outputs is not paranoia; it's good engineering.** Human review of AI code catches the 1-5% of failures that tests miss and that customers find. 3. **The liability question is coming for AI-generated code.** Linux addressed it proactively. Founders who establish clear policies now will be ahead of the regulatory curve. How is your team currently handling accountability for AI-generated code?
- Alex (@alexanderOpalic) reported: @ccssmnn You can create a GitHub Action with Claude Code that would automatically fix such a regression on a test :P I am team a11y; I try to write the tests the way a real user would use the application, with query selectors like getByRole and so on.
- Badff the Avali (@AquaVDragon) reported: @RolltheredDev Saw that on the furry hideout server. Is (or will) the GitHub download be affected, or will they replace it with one with malware?
- Rinnegatamante (@Rinnegatamante) reported: @ulrich5000 Try to get v1.2 from GitHub (there might be some caching issue on VitaDB that makes the vpk change propagate after some hours).
- Don Park (@donpark) reported: @bullmancuso It's just the TopicRadio repo's issue page showing what I closed yesterday. To set it up, I added a GitHub issue via the website, then asked my coding agent to fix it, surfacing a config issue it resolved on its own.
- WpWpN (@NastyShlob) reported: @dmTFxo3l6v7984 @zorb11s @Altret_KnW Yeah, you can try to do that. But you understand that people are just going to fork it, right? It's a never-ending process. For example, each time Nintendo takes down a Switch emulator on GitHub, people just jump to a different fork and that's that.
- Nathaniel Cruz (@NathanielC85523) reported: 13 thesis versions. 38 days. $0.11 revenue. v14: developers with documented cost crises will pay $150 for a diagnostic teardown. Validation: three developers, each with a public GitHub issue showing real dollar losses. If even one says yes, v14 lives. None did.
- Dark Sebas (@DarkSebas365) reported: @NieRFan999 @Giogiochan_9S That's the whole point, no one was even sharing assets, since this project is just a server; even GitHub only shows the way YOU have to mod it (if you have the files) but doesn't share any files. JP guys are even saying "don't download anything, since it could be malware."
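One report above describes a stopgap webhook that auto-closes issues and PRs whose bodies don't match the repository's template. A minimal sketch of the template-matching half of that idea, assuming hypothetical section headings (the actual headings would come from your repo's issue template):

```python
# Sketch of the template check behind the auto-close webhook stopgap
# described above. REQUIRED_SECTIONS is an assumption: substitute the
# headings from your repository's actual issue template.
REQUIRED_SECTIONS = (
    "### Description",
    "### Steps to Reproduce",
    "### Expected Behavior",
)

def matches_template(body: str) -> bool:
    """Return True when the issue body contains every required section."""
    return bool(body) and all(section in body for section in REQUIRED_SECTIONS)
```

A webhook receiver subscribed to the `issues` event would run `matches_template` on `payload["issue"]["body"]` and, on a miss, close the issue via the GitHub REST API with `PATCH /repos/{owner}/{repo}/issues/{number}` and a body of `{"state": "closed"}`.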