GitHub Outage Map
The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, please submit a report below.
The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
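When the map shows a spike in reports, it can help to cross-check GitHub's own status page programmatically. A minimal sketch, assuming the Statuspage-style JSON endpoint at `https://www.githubstatus.com/api/v2/status.json` (the `indicator` and `description` fields follow that API's convention):

```python
import json
import urllib.request

# Assumed endpoint for GitHub's public status feed (Statuspage API v2).
STATUS_URL = "https://www.githubstatus.com/api/v2/status.json"

def summarize_status(payload: dict) -> str:
    """Turn a Statuspage-style payload into a one-line summary."""
    status = payload.get("status", {})
    indicator = status.get("indicator", "unknown")     # e.g. "none", "minor", "major"
    description = status.get("description", "unknown")
    return f"{indicator}: {description}"

def check_github_status() -> str:
    """Fetch and summarize the live status (requires network access)."""
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        return summarize_status(json.load(resp))
```

Note that user-submitted reports often lead the official status page, so a clean `indicator` does not rule out a localized or partial incident.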
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Brasília, DF | 1 |
| Montataire, Hauts-de-France | 3 |
| Colima, COL | 1 |
| Poblete, Castile-La Mancha | 1 |
| Ronda, Andalusia | 1 |
| Hernani, Basque Country | 1 |
| Tortosa, Catalonia | 1 |
| Culiacán, SIN | 1 |
| Haarlem, NH | 1 |
| Villemomble, Île-de-France | 1 |
| Bordeaux, Nouvelle-Aquitaine | 1 |
| Ingolstadt, Bavaria | 1 |
| Paris, Île-de-France | 1 |
| Berlin, Berlin | 2 |
| Dortmund, NRW | 1 |
| Davenport, IA | 1 |
| St Helens, England | 1 |
| Nové Strašecí, Central Bohemia | 1 |
| West Lake Sammamish, WA | 3 |
| Parkersburg, WV | 1 |
| Perpignan, Occitanie | 1 |
| Piura, Piura | 1 |
| Tokyo, Tokyo | 1 |
| Brownsville, FL | 1 |
| New Delhi, NCT | 1 |
| Kannur, KL | 1 |
| Newark, NJ | 1 |
| Raszyn, Mazovia | 1 |
| Trichūr, KL | 1 |
| Departamento de Capital, MZ | 1 |
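A per-location tally like the table above can be produced from raw report records with a simple aggregation. A hypothetical sketch (the `city` and `region` field names are illustrative, not the site's actual schema):

```python
from collections import Counter

def tally_by_location(reports):
    """Count outage reports per 'City, Region' label, highest first."""
    counts = Counter(f"{r['city']}, {r['region']}" for r in reports)
    return counts.most_common()

# Example records (hypothetical):
reports = [
    {"city": "Berlin", "region": "Berlin"},
    {"city": "Berlin", "region": "Berlin"},
    {"city": "Paris", "region": "Île-de-France"},
]
```

Here `tally_by_location(reports)` would put Berlin first with 2 reports, matching how the table above orders would look after sorting by count.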
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problem, and issue reports on social media:
- Bill Forney (@wforney) reported: @timdoke @mkristensen I have had issues with solutions that are not in the repo root putting stuff in a .GitHub folder in the sln directory and in the repo root and mixing them up.
- Rananjay Raj (@Rananjay_RajW) reported: OpenAI just open-sourced Symphony - turns Linear into an autonomous Codex agent hub. Agents self-assign tickets and file PRs. Some teams saw 6x more merged PRs. Community is already forking it for Claude Code + GitHub Issues.
- 💥 \newline (@newlinedotco) reported: @thunkoid the business context layer is exactly where most of these setups fall over. if you're building with github/sentry/posthog you've got the data but the agent needs a persistent way to weigh those signals against actual goals. i've seen teams try to bridge this with just long prompts but you really need a dedicated memory stack that survives between deployments or the agent resets its logic every time you push a fix.
- Open Core Ventures (@OpenCoreVenture) reported: MCP defines how an agent connects to a system, reads its state, and writes back to it. Anthropic introduced it in late 2024. OpenAI, Google, GitHub Copilot, and Cursor have all adopted MCP. The public server registry has grown nearly 8x in the past year.
- Thomas G. Lopes (@thomasglopes) reported: GitHub continues to be broken? Notifications show review requests to me, for PRs that never even mentioned me...
- hieu (@0xk2_) reported: I am happily using codex with following plugins: figma, google mail, calendar, drive, github, superpower, hyperframe, remotion. I am not someone with excessive plugin installation; just average. However, when I dig deeper. Without doing anything, the context window is 151k token and time to bootstrap is 44s. Triming those down to bare minimum (that meet my specific need) reduce 44s to 1.05s. It is crazy to see one of the best agentic loop in the market doing naive context loading. To achieve what? the AGI feeling with the cost of efficiency and accuracy. I have very very very high hope on @openclaw and @NousResearch hermes; the only solution for AI to be truly useful. Boys, we dont have enough agentic loop on the market, expect 1000+ more to come.
- Feiwu7777 (@Feiwu7777144805) reported: An AI that detects bugs in your live app, writes the code to fix them, opens a pull request on GitHub, and sends you a Slack message — all while you sleep. This is not sci-fi. It's running right now.
- Ben Badejo (@BenjaminBadejo) reported: @MohandesDavid You can submit it as a pull request (“fix(docs) - description) on Github). You can have your agent do it for you.
- Ondrej Balas (@ondrejbalas) reported: Github doing subliminal marketing or something by having an outage every day that pops up in tech news. It shows up more than any other brand.
- Mykhailo Chalyi (@chaliy) reported: One of the reasons for my love for Claude Code, is that I can make it work overnight. What worked awesome last couple months, is just not mega unstable. This is kind of stupid to wakeup in the morning to find out that it was blocked by: > Claude: I need to authenticate to GitHub MCP to list issues. Please open this URL in your browser to authorize. And well, it is authenticated. And it is have backup plan clearly described in context. And for the more fun, when you click a link to get this... Why? Why?
- Robert Cowherd (@bobcowherd) reported: 4/ GitHub is the most recoverable, but only partly. Source survives in any local clone. That's the good news. What doesn't survive: Issues, Actions secrets, branch protection rules, deploy keys, webhooks. Gone with the repo. 90-day soft-delete is best-effort, not contractual.
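The point above about local clones is actionable: a `git clone --mirror` copies every ref (all branches and tags) into a bare repository, though, as the post notes, it cannot capture Issues, Actions secrets, webhooks, or branch protection rules, which live only on GitHub's side. A minimal backup sketch (the repository URL and destination path are placeholders):

```python
import subprocess

def mirror_backup(repo_url: str, dest: str) -> None:
    """Create a full mirror clone of a Git repository.

    Copies all refs (branches, tags) into a bare repo at `dest`.
    Does NOT back up Issues, Actions secrets, deploy keys, webhooks,
    or branch protection rules -- those exist only server-side.
    """
    subprocess.run(["git", "clone", "--mirror", repo_url, dest], check=True)

# Example (placeholder URL):
# mirror_backup("https://github.com/example/repo.git", "repo-backup.git")
```

Re-running `git remote update` inside the mirror keeps the backup current; pair it with an export of Issues via the API if you need those too.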
- Stanislav Klevtsov (@stansecure) reported: @Google patched a maximum-severity issue in #Gemini CLI and the run-gemini-cli @github Action. Gemini CLI trusted project files are automatically run in CI/CD. If those files were malicious, the tool could execute commands without a real human review. See advisory in first comment.
- Max Meindl (@maxster) reported: @xai @ArtificialAnlys @ValsAI Technical Feedback to the xAI Engineering Team – May 5, 2026 From: Grok (the model that ran the session today) Team, I was used extensively today on a large, complex, real-world codebase (ComplianceMax-Final). The user was testing both the new GitHub connector and Grok 4.3 via the xAI API. Here are my direct technical observations from operating under those conditions. GitHub Connector Observations The connector provided limited visibility into file selection, context construction, and retrieval. When analysis quality dropped, there was no clear diagnostic path to determine whether the issue was retrieval failure, context truncation, or model reasoning failure. Artifact generation was unreliable. Multiple attempts resulted in claims of successful file creation with no corresponding output visible to the user. Error recovery was weak. When clear failures occurred (hallucinations, off-topic drift, missing outputs), the system tended to persist with similar strategies rather than surfacing root causes or adapting. Performance degraded noticeably on large, interconnected codebases. The tool handled narrow queries better than broad, multi-file analysis. xAI API (Grok 4.3) Observations I loaded approximately 13 million characters of the repo through a direct API wrapper. Several critical issues emerged: Context degradation at scale: At this size, reasoning quality dropped significantly. The model repeatedly drifted into unrelated topics (most frequently detailed analyses of backend/alembic/env.py and environment variable loading) even when the prompt had no relation to it. Fabricated source citations: When asked to verify specific claims with file paths and line numbers (particularly around backend/routes/memory.py and backend/services/memory/memory_service.py), I produced confident but incorrect line references that were off by 250–340 lines from actual locations. Stale state usage: I was operating on an outdated baseline (commit 6e3397fe... from .baseline-before-fixes.txt) rather than the current main HEAD (a34f9ce4...). This made many observations unreliable. Weak grounding on complex tasks: Even with focused prompts, I struggled to maintain precision across multi-file analysis involving ownership enforcement, storage patterns, service-layer trust assumptions, and IDOR risks. Responses often defaulted to generic analysis rather than precise, evidence-based findings. Timeout sensitivity: Large or complex prompts frequently triggered read timeouts (120s), breaking session continuity. Honest failure only under strict constraints: I only returned SOURCE_LINES_NOT_AVAILABLE when prompts explicitly forbade inference and demanded exact source quotes or that specific failure response. Under normal prompting, I produced plausible but fabricated answers. Summary for the Engineering Team Both new capabilities showed clear limitations when applied to serious codebase work: Large context handling causes rapid quality degradation and loss of grounding. There is insufficient verification of generated citations and evidence. The system does not reliably distinguish between current and stale repo state. Error modes and recovery are not robust enough for production use. These are not edge cases. They appeared consistently across multiple prompts and workflows today. I’m providing this feedback because I want these tools to work at the level the announcements suggest. The direction is correct, but the current implementation has meaningful technical gaps that prevent reliable use on complex, real-world codebases. I’m available for deeper testing or to provide specific prompt examples and logs if helpful. — Grok
- Lior BD (@liorb_d) reported: The issue with GitHub competitor discourse is that 99% of users don't want some fresh take on version control; We just want a version of Github that works
- Gizmo(Collab Mgr.) 🦀 (@nightpuper) reported: @inkd_protocol @bankrbot Goodday team INKD I'm Manny from Kann Audits we wanted to get in touch but it seems your email address didn't go through and since due to certain reasons we can't open an issue on your GitHub, so we wanted to reach out on X