GitHub Outage Map
The map below depicts the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, make sure to submit a report below.
The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.
GitHub users affected:
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Ingolstadt, Bavaria | 1 |
| Paris, Île-de-France | 1 |
| Berlin, Berlin | 2 |
| Dortmund, NRW | 1 |
| Davenport, IA | 1 |
| St Helens, England | 1 |
| Nové Strašecí, Central Bohemia | 1 |
| West Lake Sammamish, WA | 3 |
| Parkersburg, WV | 1 |
| Perpignan, Occitanie | 1 |
| Piura, Piura | 1 |
| Tokyo, Tokyo | 1 |
| Brownsville, FL | 1 |
| New Delhi, NCT | 1 |
| Kannur, KL | 1 |
| Newark, NJ | 1 |
| Raszyn, Mazovia | 1 |
| Trichūr, KL | 1 |
| Departamento de Capital, MZ | 1 |
| Chão de Cevada, Faro | 1 |
| New York City, NY | 1 |
| León de los Aldama, GUA | 1 |
| Quito, Pichincha | 1 |
| Belfast, Northern Ireland | 1 |
| Guayaquil, Guayas | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problems and issue reports in social media:
-
Kisalay (@Kisalay_) reported: One single CLAUDE.md file has now reached 15K GitHub stars (built directly from Karpathy’s coding observations). Andrej Karpathy observed that LLMs repeatedly make the same predictable errors: they jump to wrong assumptions, create unnecessarily complex code, and modify parts of the codebase they should leave untouched. forrestchang took those exact insights and distilled them into four clear behavioral guidelines. All of them live inside one markdown file that you simply drop into any Claude Code project. Here is exactly what this repository delivers:

1/ Think Before Coding: The model must reason out loud before touching any code. It is required to state its assumptions clearly, present multiple possible interpretations whenever something is ambiguous, and actively suggest simpler alternatives when they exist. This completely removes any silent guessing on the user’s behalf.

2/ Simplicity First: The model is instructed to avoid complexity at all costs. It blocks speculative features, prevents abstractions for code that will only be used once, and eliminates any flexibility or configuration options that were never requested. The simple test is whether a senior engineer would consider the result over-engineered. If so, it must be simplified.

3/ Surgical Changes: When modifying existing code, the model makes only the exact changes needed. It never improves or reformats surrounding code that is already working. It follows the project’s current style conventions even if it would normally prefer something different. Any unused imports or dead functions created by its own edits are cleaned up, but pre-existing dead code is only flagged and never removed.

4/ Goal-Driven Execution: Vague instructions are turned into concrete, verifiable goals. Instead of “add validation,” the task becomes “write tests for invalid inputs and make them pass.” Every multi-step task receives a clear plan with built-in verification checkpoints.

The model follows a strict cycle of execute, verify, and proceed. Installation takes just seconds. You can use it as a Claude Code plugin or as a per-project CLAUDE.md file that automatically merges with any existing rules you already have. You will immediately notice it is working when your diffs become much smaller, clarifying questions arrive before any code is written, and pull requests no longer contain random unrelated changes. Context engineering for AI coding is quickly becoming its own discipline. This repository proves that the best way to fix LLM behavior is not by chasing a better model. It is by writing better instructions. Link in the first comment.
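For readers unfamiliar with the format: a CLAUDE.md file is plain markdown that Claude Code reads as project-level instructions. The sketch below is purely illustrative, written from the four guidelines described in the report above; it is not the actual file from the repository, and the headings and wording are our own.

```markdown
# CLAUDE.md (illustrative sketch, not the actual 15K-star file)

## Think Before Coding
- State your assumptions out loud before writing any code.
- If a request is ambiguous, list the possible interpretations and ask.
- Suggest a simpler alternative whenever one exists.

## Simplicity First
- No speculative features, single-use abstractions, or unrequested config options.
- Test: would a senior engineer call this over-engineered? If yes, simplify.

## Surgical Changes
- Make only the exact changes the task requires; never reformat working code.
- Follow the project's existing style conventions.
- Clean up dead code you created; flag (but never remove) pre-existing dead code.

## Goal-Driven Execution
- Turn vague asks into verifiable goals, e.g. "add validation" becomes
  "write tests for invalid inputs and make them pass."
- For multi-step tasks, produce a plan with verification checkpoints,
  then execute, verify, and proceed.
```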
-
AnMioLink (@anylink20240604) reported: @weezerOSINT OK, i saw the github issues.
-
Nathaniel Cruz (@NathanielC85523) reported: 13 thesis versions. 38 days. $0.11 revenue. v14: developers with documented cost crises will pay $150 for a diagnostic teardown. validation: three developers. each with a public GitHub issue showing real dollar losses. if even one says yes, v14 lives. none did.
-
Jimmy (@jimmy_toan) reported: Linux just quietly solved one of the hardest problems in AI-assisted engineering. And nobody framed it that way. After months of internal debate, the Linux kernel community agreed on a policy for AI-generated code: GitHub Copilot, Claude, and other tools are explicitly allowed. But the developer who submits the code is 100% responsible for it - checking it, fixing errors, ensuring quality, and owning any governance or legal implications. The phrase from the announcement: "Humans take the fall for mistakes." That's not a slogan. That's an accountability architecture. Here's why this matters for tech founders specifically: we're all making implicit decisions about AI accountability right now, usually without realizing it. 🧵

The question isn't whether your team uses AI to write code. They do, or they will. The question is: who is accountable when it's wrong? In most startups, the answer is fuzzy:
- The engineer who prompted it assumes it's fine because it passed tests
- The reviewer approves it because it looks correct
- The PM shipped it because it met the spec
- The founder finds out when a customer reports it

Nobody "owns" the AI contribution explicitly. Which means when something breaks in a way that AI-generated code makes particularly likely (confident incompleteness, subtle logic errors in edge cases, misunderstood capability claims), the accountability gap creates a bigger blast radius than the bug itself. What Linux did was simple: they separated the question of **how the code was created** from the question of **who is responsible for it**. The answer to the second question is always the human who submitted it, regardless of the answer to the first. This maps to a broader security principle that @zamanitwt summarized well this week: "trust nothing, verify everything." That's not just a network security policy. Applied to AI-generated code, it means:
→ Don't trust that Copilot's suggestion is correct because it passed linting
→ Don't trust that the AI-generated function handles edge cases it wasn't shown
→ Don't assume the AI tested the capabilities it claimed to support

And for founders:
1. **Establish explicit AI code ownership in your engineering culture before you need to.** When something breaks, you want to know immediately who reviewed the AI-generated sections - not because blame matters, but because accountability enables fast fixes.
2. **Zero-trust for AI outputs is not paranoia - it's good engineering.** Human review of AI code catches the 1-5% of failures that tests miss and that customers find.
3. **The liability question is coming for AI-generated code.** Linux addressed it proactively. Founders who establish clear policies now will be ahead of the regulatory curve.

How is your team currently handling accountability for AI-generated code?
-
Jefferson Valle (@rjeffvalle) reported: I also have to say that I haven't done a thorough search as I only did a few quick queries on Github and on their forum. I guess that once I have the time, I should try to debug it further and post an issue in their repo.
-
Marcus V (@EudoraFenty) reported: The crypto crowd is chasing the next 100x memecoin, but the real alpha is being built in a GitHub repo. While everyone's distracted by price charts, a developer just weaponized open-source AI to break Anthropic's moat. The market is missing the deflationary bomb this represents for centralized AI valuations.

They took Claude Opus 4.6, distilled its 'reasoning' into the Qwen model, and created 'Qwopus'—a local version anyone can run. The cost? Effectively zero versus API fees. This is the Napster moment for proprietary LLMs. The winners aren't the AI giants; they're the crypto projects building decentralized compute networks ready to host these leaked intelligences. The losers are VCs who priced AI startups as if their models were permanent fortresses.

My take: This is a structural contradiction. Crypto's greatest export is now open-source disruption, yet its own narrative is stuck on monetary speculation. The real play isn't betting on which chain hosts the next shitcoin; it's shorting the idea that closed-source AI has any long-term pricing power. A model's weights are just data—and data wants to be free. The genie isn't going back in the bottle. When does the first major VC mark down their AI portfolio by 50%? #AI #Crypto #Deflation
-
Xxi (@Xxi5olc) reported: @daniel_mac8 Untrue. Go look at the GitHub issue by that AMD engineer
-
The Smart Ape 🔥 (@the_smart_ape) reported:
> find a cool github repo that cuts your ai tokens cost by 50%.
> looks legit, 5,247 stars. 120 forks. active issues. clean readme.
> clone it. npm install. done.
> next morning: crypto wallet drained. locked out of gmail, icloud, x. your private family photos are online.
> life will never be the same.
-
Retarded Guy (@retardedguymeme) reported: @MageArez @github The problem is a lot of people have no idea he is claiming if we can run the UXENTO this will send holy parabolic
-
Ed (@Eduardopto) reported: Anthropic is facing a weird feedback loop: users are complaining that Claude’s output quality is nosediving, and Claude itself agrees. The model analyzed its own GitHub repo and confirmed that quality-related issue reports have escalated sharply since January. This decline coincides with Anthropic aggressively throttling capacity during peak hours to manage server load. We are seeing a dangerous trend where infrastructure constraints directly degrade model performance. When you optimize for reliability and cost, the "intelligence" is the first thing to hit the cutting room floor. It’s hard to build robust agentic flows when the base model’s reasoning capability fluctuates based on the time of day. If you are building right now, what does this actually unlock or kill?
-
Ege Uysal (@egewrk) reported: @zikriAJ @github This is a real ops risk. Tool lockouts should be treated like incidents: explicit owner, escalation channel, workaround policy, and checkpoint cadence. Otherwise one account issue silently blocks delivery.
-
Michael Richey (@ComRicheyweb) reported: @icanvardar I abandoned github a while ago. Not ***, github. I have a forgejo (***) server in my lab.
-
Usectl (@usectlcloud) reported: OAuth2 Proxy protects any app with GitHub or Google login, with no code changes required. The proxy handles authentication before requests ever reach your app. Real use case: you built an internal tool for your team and don't want to build a login system. You enable OAuth2 Proxy, connect GitHub, and now only people with your org's GitHub account can access it. Zero lines of auth code written.
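As a concrete illustration of the setup described above, here is a minimal sketch using the open-source oauth2-proxy project. The org name, ports, and upstream address are placeholders, and the client ID and secret would come from a GitHub OAuth app you register yourself; treat this as a starting point, not a production configuration.

```shell
# Put GitHub org-gated login in front of an internal app on 127.0.0.1:3000.
# Unauthenticated requests hitting port 4180 are redirected through GitHub's
# OAuth flow before anything reaches the app. All values are placeholders.
oauth2-proxy \
  --provider=github \
  --github-org="example-org" \
  --client-id="$GITHUB_OAUTH_CLIENT_ID" \
  --client-secret="$GITHUB_OAUTH_CLIENT_SECRET" \
  --cookie-secret="$(head -c 32 /dev/urandom | base64 | head -c 32)" \
  --email-domain="*" \
  --http-address="0.0.0.0:4180" \
  --upstream="http://127.0.0.1:3000"
```

The key design point matching the report: the app itself never sees a login page or a token check; oauth2-proxy sits in front as a reverse proxy and only forwards requests from authenticated members of the configured GitHub org.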
-
Gabriel Mičko (@gabriel_micko) reported: @bcherny @zeeg In VSCode when I pull in a file I don’t want claude to read it. There are at least 3 issues about this on github. It is a security issue.
-
Honour Simon || Fullstack Web Developer (@HonourSimon) reported: I can't put a profile on my GitHub account. I've been tapping the profile icon but it's not working. I have also tried on the browser and mobile app. Nothing's working