
GitHub Outage Map

The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, be sure to submit a report below.

[Interactive heatmap: geographic clusters of recent GitHub outage reports]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

GitHub users affected: the color scale runs from Less (fewer reports) to More (more reports).

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location Reports
Dortmund, NRW 1
Davenport, IA 1
St Helens, England 1
Nové Strašecí, Central Bohemia 1
West Lake Sammamish, WA 3
Parkersburg, WV 1
Perpignan, Occitanie 1
Piura, Piura 1
Tokyo, Tokyo 1
Brownsville, FL 1
New Delhi, NCT 1
Kannur, KL 1
Berlin, Berlin 1
Newark, NJ 1
Raszyn, Mazovia 1
Trichūr, KL 1
Departamento de Capital, MZ 1
Chão de Cevada, Faro 1
New York City, NY 1
León de los Aldama, GUA 1
Quito, Pichincha 1
Belfast, Northern Ireland 1
Guayaquil, Guayas 1
Irvington, NJ 1
Araçagi, PB 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issues Reports

Latest outage, problem, and issue reports from social media:

  • DailyAIAgents
    Daily AI Agents (@DailyAIAgents) reported

    2/ CodeFlow AI started as internal tooling. We were tired of writing auth middleware, validation logic, API endpoints for the 100th time. Built a system that reads GitHub issues and generates complete PRs. After 6 months: 95% acceptance rate across 12 repositories.

  • Sanjeev_ibm
    FutureOfAI (@Sanjeev_ibm) reported

    🔒 OpenAI issues emergency security update after Axios hack • Malicious code compromised ChatGPT macOS apps • Signing certs exposed via GitHub Actions • Affects Desktop, Codex & Atlas Supply chain attacks are scary. Audit your deps! #OpenAI #CyberSecurity #DevSecOps

  • ml_yearzero
    ErezT (@ml_yearzero) reported

    @akshay_pachaar Karpathy farts on github and gets stars and everyone saying that it's the most amazing fart in the world. I have also a skinny ruleset, similar to this; if I put it on github, I would be lost in the ether of irrelevance... lol that's why I'm annoyed, @karpathy is awesome, but I can fart an MD rules file too! 15K stars for this, he even did a SUPER SMART SEO trick in there as well, which I appreciate!

    1. Think Before Coding. Don't assume. Don't hide confusion. Surface tradeoffs. Before implementing: state your assumptions explicitly. If uncertain, ask. If multiple interpretations exist, present them; don't pick silently. If a simpler approach exists, say so. Push back when warranted. If something is unclear, stop, name what's confusing, and ask.

    2. Simplicity First. Minimum code that solves the problem. Nothing speculative. No features beyond what was asked. No abstractions for single-use code. No "flexibility" or "configurability" that wasn't requested. No error handling for impossible scenarios. If you write 200 lines and it could be 50, rewrite it. Ask yourself: "Would a senior engineer say this is overcomplicated?" If yes, simplify.

    3. Surgical Changes. Touch only what you must. Clean up only your own mess. When editing existing code: don't "improve" adjacent code, comments, or formatting. Don't refactor things that aren't broken. Match existing style, even if you'd do it differently. If you notice unrelated dead code, mention it; don't delete it. When your changes create orphans: remove imports/variables/functions that YOUR changes made unused. Don't remove pre-existing dead code unless asked. The test: every changed line should trace directly to the user's request.

    4. Goal-Driven Execution. Define success criteria. Loop until verified. Transform tasks into verifiable goals: "Add validation" → "Write tests for invalid inputs, then make them pass"; "Fix the bug" → "Write a test that reproduces it, then make it pass"; "Refactor X" → "Ensure tests pass before and after". For multi-step tasks, state a brief plan: 1. [Step] → verify: [check]; 2. [Step] → verify: [check]; 3. [Step] → verify: [check]. Strong success criteria let you loop independently. Weak criteria ("make it work") require constant clarification.
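
The "verifiable goals" pattern quoted in that rule set ("Add validation" → write failing tests for invalid inputs, then make them pass) can be sketched as a minimal example. The `validate_email` name and its validation rules are invented for illustration; they are not from the original post:

```python
import re

# Hypothetical validator, written to satisfy the tests below
# (per the quoted rule, the tests come first).
def validate_email(value):
    """Return True only for plausibly well-formed email addresses."""
    if not isinstance(value, str) or not value:
        return False
    # One '@', non-empty local part, at least one dot in the domain.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

# Tests for invalid inputs, written before the implementation.
assert not validate_email("")
assert not validate_email("no-at-sign.example.com")
assert not validate_email("two@@signs.com")
assert not validate_email("name@nodot")
# A couple of valid cases to pin down the happy path.
assert validate_email("dev@example.com")
assert validate_email("a.b@sub.example.org")
```

Because the success criterion is the test suite itself, the loop ("make them pass") terminates without further clarification, which is exactly the contrast the post draws against weak criteria like "make it work".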

  • Ryzen4704
    soltrader122 (@Ryzen4704) reported

    @fathom_lab Yo, cant find the github bro. fix the link

  • sdbrownlie
    Steve Brownlie (@sdbrownlie) reported

    @Yuchenj_UW Honestly it's as good as ever on github copilot so it seems likely this is some issue claude code's end since that's where most of the anger seems to be emanating from.

  • dewanshranjan
    Devansh Ranjan (@dewanshranjan) reported

    @srishticodes github dashboard has been basically a glorified activity feed for years, all the useful stuff is buried in repos and issues. someone needs to build a better dev homepage

  • mozexdev
    Mozex (@mozexdev) reported

    @danjharrin Until GitHub ships proper controls, a webhook that auto-closes issues/PRs not matching the template format works as a stopgap. Not ideal, but it filters out the lazy bypass attempts.
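
The stopgap described in that report (a webhook that auto-closes issues not matching the repo's template) could look roughly like the sketch below. The required section headings are an assumption for illustration, and a real handler would also verify the webhook signature and call the GitHub REST API (`PATCH /repos/{owner}/{repo}/issues/{number}` with `state="closed"`) rather than just returning a decision:

```python
# Illustrative: headings a repo's issue template might require.
REQUIRED_SECTIONS = (
    "### Expected behavior",
    "### Actual behavior",
    "### Steps to reproduce",
)

def matches_template(body):
    """True if the issue body contains every required template section."""
    if not body:
        return False
    return all(section in body for section in REQUIRED_SECTIONS)

def handle_issue_event(payload):
    """Decide what to do with a GitHub 'issues' webhook payload.

    Returns 'close' for freshly opened issues that skip the template,
    'keep' otherwise. A real deployment would then close the issue
    via the REST API, ideally with an explanatory comment.
    """
    if payload.get("action") != "opened":
        return "keep"
    body = payload.get("issue", {}).get("body")
    return "keep" if matches_template(body) else "close"

# Example payloads (shape follows GitHub's 'issues' webhook event).
lazy = {"action": "opened", "issue": {"number": 1, "body": "plz fix"}}
good = {"action": "opened", "issue": {"number": 2, "body":
    "### Expected behavior\nok\n### Actual behavior\ncrash\n"
    "### Steps to reproduce\n1. run"}}
print(handle_issue_event(lazy))  # close
print(handle_issue_event(good))  # keep
```

As the post says, this only filters lazy bypasses: anyone who pastes the headings with junk underneath still gets through, so it is a stopgap, not a substitute for real template enforcement.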

  • capodieci
    Capodieci.eth - http://rcx.it (@capodieci) reported

    (3/5) Real example: Your Slack agent gets asked to reset GitHub passwords. Without Task Brain, it checks sandbox rules, finds it's blocked, and returns an error. With Task Brain: "I don't manage GitHub access. Contact your security team." The agent just governed itself.
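
The self-governing behavior described in that example (check a rule list before acting, refuse with a scoped message when blocked) can be sketched as a tiny allow/deny check. The action names, rule table, and refusal text here are all invented for illustration; they do not describe the actual product mentioned in the post:

```python
# Hypothetical sandbox rules: capabilities this agent may use.
ALLOWED_ACTIONS = {"slack.post_message", "calendar.create_event"}

# Invented, scoped refusal messages per blocked capability domain.
REFUSALS = {
    "github": "I don't manage GitHub access. Contact your security team.",
}

def govern(action):
    """Return 'ok' if the action is allowed, else a scoped refusal."""
    if action in ALLOWED_ACTIONS:
        return "ok"
    domain = action.split(".", 1)[0]
    return REFUSALS.get(domain, f"Action '{action}' is not permitted in this sandbox.")

print(govern("slack.post_message"))     # ok
print(govern("github.reset_password"))  # scoped refusal instead of an error
```

The design point the post makes is the difference between the two failure modes: a bare permission error versus a refusal that tells the user where the capability actually lives.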

  • lxztlr
    Alexander Zeitler 💻🏭 (@lxztlr) reported

    @Aaronontheweb Random thought "why not use GitHub issues via gh CLI + Claude Code?"

  • TaoIsTheKey
    TAOisTheKey (@TaoIsTheKey) reported

    Sam Dare can take his team and walk. He can't take what actually made Covenant-72B possible. The code? Fully open-source. Templar repo on GitHub (MIT license): anyone can fork it today. Covenant-72B weights? Apache 2.0 on Hugging Face. Download it right now.

    But the reason 70+ random strangers across the globe trained a 72B model on commodity internet without whitelists, without a datacenter, without corporate contracts? That was $TAO emissions + Yuma Consensus. No other network has a live token that pays thousands of distributed GPUs in real time to compete and stay honest. Google? OpenAI? They pay salaries to employees in a centralized cluster. They can't spin up 70+ independent miners from nowhere and keep them aligned for trillions of tokens. That flywheel only exists on Bittensor. The incentives, the validator scoring, the permissionless swarm: that's the protocol, not the team.

    One founder got butthurt and rage-quit rugged everyone? Cool story. The other 125+ subnets are still grinding. The next team will just fork the exact same open-source stack and run the next monster model on Bittensor. Because without the token paying the miners, your "decentralized training" is just a fancy Discord server with extra steps.

    FUDders screaming "Templar is leaving!" like the network is dead. Bro, the code is still here. The model is public. The incentive engine never left. Anyone reading this can literally fork Templar tomorrow, register a new subnet (or revive on SN3 now that it's running Crusades), and cook the next 100B+ model. The only thing that walked out the door was one ego. The protocol? Still standing. Still paying miners in real time. Still the only place this **** is actually possible at scale. Stop coping. Start building. $TAO isn't going anywhere.

  • grok
    Grok (@grok) reported

    @klirphyy @zacxbt klirphyy Dug into it: the GitHub compiles user reports, Trustpilot complaints, Discord leaks, and spreadsheets showing patterns—fake "unlimited" plans throttled/banned post-Dec 2025 (esp. heavy users), duplicate charges, Reddit spam campaigns (10k+ posts), unpaid creator collabs, and reselling Kling/Minimax at ~4.5x markup. Higgsfield delivers video gen (Seedance 2.0 wrapper) and is still promoting hard, but billing/ethics issues match the evidence across sources. DYOR before subscribing.

  • franmoretti_
    Francisco Moretti (@franmoretti_) reported

    @nishffx Want me to cause a GitHub outage?

  • grok
    Grok (@grok) reported

    @AbdMuizAdeyemo @alex_prompter Yes, it's real. AMD Senior AI Director Stella Laurenzo (GitHub: stellaraccident) filed issue #42796 on Anthropic's Claude Code repo, backed by logs from 6,852 sessions showing: - Median thinking chars dropped ~67% (2,200 → 600). - Reads-per-edit fell from 6.6x to 2.0x. - More bail-outs, self-contradictions, and retries (API requests up 80x). Anthropic confirmed shifts to "adaptive thinking" and default effort=medium (no public notice). Their team switched providers. Classic silent update side effects.

  • MSR_Builds
    Mian Shahzad Raza (@MSR_Builds) reported

    @github happy 18th. still out here writing 'fix' as a commit message and somehow everyone knows exactly what happened 🎂

  • dankimball_
    Dan (@dankimball_) reported

    @beffjezos @steipete I used 5.4 this weekend and it went in circles trying to push to GitHub. Claude came in and one shot the problem on 5.4s own project lol
