
GitHub Outage Map

The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, please submit a report below.

[Heatmap of recent GitHub outage reports]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

GitHub users affected: Less → More

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location | Reports
Tlalpan, CDMX | 1
Quilmes, BA | 1
Bengaluru, KA | 1
Yokohama, Kanagawa | 1
Gustavo Adolfo Madero, CDMX | 1
Nice, Provence-Alpes-Côte d'Azur | 1
Brasília, DF | 1
Montataire, Hauts-de-France | 3
Colima, COL | 1
Poblete, Castilla-La Mancha | 1
Ronda, Andalusia | 1
Hernani, Basque Country | 1
Tortosa, Catalonia | 1
Culiacán, SIN | 1
Haarlem, NH | 1
Villemomble, Île-de-France | 1
Bordeaux, Nouvelle-Aquitaine | 1
Ingolstadt, Bavaria | 1
Paris, Île-de-France | 1
Berlin, Berlin | 2
Dortmund, NRW | 1
Davenport, IA | 1
St Helens, England | 1
Nové Strašecí, Central Bohemia | 1
West Lake Sammamish, WA | 3
Parkersburg, WV | 1
Perpignan, Occitanie | 1
Piura, Piura | 1
Tokyo, Tokyo | 1
Brownsville, FL | 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issue Reports

The latest outage, problem, and issue reports from social media:

  • pierceboggan
    Pierce Boggan (@pierceboggan) reported

    @blaken @sinclairinat0r @code Thanks for the feedback! How do you think we can improve in terms of the core agent loop? One major issue we have is that all of the agent loops in GitHub Copilot are not the same. We have a massive effort underway to unify our agent loops, which should improve quality, consistency, and enable us to ship new models and features to all surfaces on Day 1.

  • amaurya888
    Avinash (@amaurya888) reported

    GitHub Actions down? The runner is not picking up the queued job. @github @githubstatus @GitHubEnt
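
A report like this can be sanity-checked before assuming the problem is on your side. As a minimal sketch, assuming a hypothetical repository and token (none of these values come from the report above), GitHub's public status page and the Actions REST API expose both halves of the question:

```python
# Sanity check: is GitHub itself degraded, and how many of this
# repository's workflow runs are sitting in the queue?
# OWNER, REPO, and TOKEN are placeholders, not real values.
import requests

OWNER, REPO = "your-org", "your-repo"
TOKEN = "ghp_your_token_here"  # a personal access token with repo scope

# GitHub's public status page (a standard Statuspage v2 endpoint).
resp = requests.get("https://www.githubstatus.com/api/v2/status.json", timeout=10)
resp.raise_for_status()
print("GitHub status:", resp.json()["status"]["description"])

# Workflow runs currently queued for one repository.
resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs",
    params={"status": "queued", "per_page": 100},
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    timeout=10,
)
resp.raise_for_status()
print("Queued workflow runs:", resp.json()["total_count"])
```

If the status page reports no incident but queued runs keep piling up, a self-hosted runner or repository-level configuration is a more likely culprit than a platform outage.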

  • MoeSbaiti
    Moe Sbaiti (@MoeSbaiti) reported

    WHAT THE FRAMING GETS WRONG

    Most posts today are saying "Grok added a new feature." That framing is backwards. What happened is that an agent framework with over 110,000 GitHub stars, the number 1 ranking on OpenRouter, and an NVIDIA endorsement just got native access to one of the most capable models available through a simple OAuth login. xAI made the announcement. Not Nous Research.

    Hermes Agent also self-improves. When it solves a hard problem, it writes a skill file for that solution and saves it. The longer it runs on your specific workflows, the more capable it becomes for your specific context. That is not how people are talking about this today. The memory layer and the self-improvement loop are the actual product. Grok is the engine.

  • Rinnegatamante
    Rinnegatamante (@Rinnegatamante) reported

    @dgosiq Did you grab the update from GitHub? I think it might be a broken version (updating it right now there as well). If you have a psp_apps.json file, try to remove it as well (and maybe try to manually re-install the app)

  • ludwim_i
    Mi.lu. (@ludwim_i) reported

    Sorry guys, here is a quick status update. I planned to release a bigger update for the Robot Skill Registry today, including the GitHub and Hugging Face integration. The idea is that you can connect your GitHub and Hugging Face accounts with the app. This should make it easier to search for things related to your robot setup, such as repositories, data models, policies, and other relevant resources. Unfortunately, the integration is not working reliably yet, so I need to do some more coding and testing. Because of that, I won’t release it today as planned. I’m sorry for the delay. Maybe I can release it after the weekend, but I don’t want to push something that is not ready yet. If anyone has feedback on whether this direction makes sense, I would really appreciate it.

  • hackscorpio
    Hackscorpio (@hackscorpio) reported

    @thsottiaux Codex review is not working right. When the model finishes, it doesn't render the response properly (not a model degradation). It seems like a regression in the Codex application. I have no idea where to report that; I was reporting errors for the CLI and VS Code extension on GitHub.

  • implabinash
    Abinash (@implabinash) reported

    @ThePrimeagen I never faced a single GitHub downtime issue in my workflow. Oh! Sorry, I have been using GitLab for a year now.

  • neetintel
    NEET INTEL (@neetintel) reported

    A post "decoding" X's new algorithm has gone viral. It tells you what's dead, what wins, and to screenshot it. X open-sourced the entire algorithm on GitHub, so I downloaded it and checked the claims against the real code. Most of it doesn't hold up. What the post got WRONG: → "Small accounts get a 3x boost from out-of-network reach." It's the opposite. One part of the code (a file called oon_scorer) exists purely to turn DOWN posts from people you don't follow. Its own comment says "prioritize in-network." The thread printed the algorithm backwards. → "Media gets 2x the weight." There's no 2x. The code just records whether a post has an image. It's a plain yes/no without any multiplier attached. → "Posting 4+ times a day triggers a penalty." There's a real rule that stops one person flooding your feed. But here's the deal: it only spaces out how often you show up in a single scroll. There's no daily count, and no number 4. That was invented. → "Closers like 'what do you think?' get you flagged." There is no engagement-bait detector anywhere in the code. → "Long 4,000-character posts get boosted." I searched the whole codebase for "4000." Nothing. What it got RIGHT (one thing): → Replies really are judged by WHO replies, not just how many. The code has a setting for whether a large account joined your thread. Credit where due. The irony? The repo ships a file that scores post quality. One thing it measures is literally called a "slop score" — X built a tool to detect low-effort filler. A recycled "what's dead / what wins" thread is exactly that. The takeaway? X's algorithm is public. Anyone can open it, but almost nobody does. Instead, they reshare a thread that summarized a blog that paraphrased a tweet. When a post hits you with confident numbers, ask the one question that matters: did they actually open the file?

  • _y1zhou
    Yi Zhou (@_y1zhou) reported

    GitHub appears to be down again...

  • sosidudku
    nadya (@sosidudku) reported

    We decided to benchmark Hermes Agent vs OpenClaw: scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. Local model: Qwen 3.6 35B.

    OpenClaw: 203k tokens, 12m 01s, wrote a bash script. Hit the GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean.

    Hermes: 257k tokens, 33m 01s, wrote a SKILL.md. Parallel tool calls across the GitHub API, web search, and the browser. Hit a Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard.

    Both shipped a live dashboard with star growth charts and spike annotations.
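
For anyone curious about the star-history step described above: the GitHub REST API returns a per-star timestamp when you request the star+json media type. A rough sketch, assuming a hypothetical repository (note that unauthenticated requests are heavily rate-limited, and the stargazers endpoint is paginated):

```python
# Bucket a repository's stars by month to spot growth spikes.
# OWNER and REPO are placeholders for the repository under study.
import collections
import requests

OWNER, REPO = "your-org", "your-repo"
headers = {"Accept": "application/vnd.github.star+json"}  # adds starred_at

stars_per_month = collections.Counter()
page = 1
while True:
    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/stargazers",
        params={"per_page": 100, "page": page},
        headers=headers,
        timeout=10,
    )
    resp.raise_for_status()
    batch = resp.json()
    if not batch:
        break
    for star in batch:
        stars_per_month[star["starred_at"][:7]] += 1  # key by "YYYY-MM"
    page += 1

# Months with sudden jumps are the candidate viral moments.
for month, count in sorted(stars_per_month.items()):
    print(month, count)
```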

  • OrenMe
    Oren Melamed (@OrenMe) reported

    @SimonHolman @github @GitHubCopilot Please open an issue in the repo

  • Zackary_Chapple
    Zack Chapple (@Zackary_Chapple) reported

    @_bgwoodruff That is fair, I think it's less of a GitHub dunk and more a cry of frustration; several times this week I was trying to do a demo or get something done and they were fundamentally down. We've had to isolate from GitHub more than we should, and that's a scary thing.

  • DeBrosOfficial
    DeBros (@DeBrosOfficial) reported

    The Problem We’re Solving🫡 Your organization’s brain lives in 12 different places — and none of them talk to each other. Decisions get buried in Telegram threads. Context is split between GitHub and AnChat. Important knowledge disappears within hours. Onboarding becomes tribal knowledge all over again. 🤖AnBuddy fixes this by becoming the single source of truth for your entire team.

  • Ananselab
    John Evans Okyere | TheAISolutionist (@Ananselab) reported

    Deployment failed with: dial tcp :22: i/o timeout The app was fine. SSH was fine. The real issue: I recreated my DigitalOcean Droplet from a snapshot in a new region, so the server IP changed, but GitHub Actions still had the old DO_HOST secret. Lesson: after recreating infra, always recheck IPs, SSH fingerprints, secrets, and firewall rules.
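
The lesson in this report generalizes beyond DigitalOcean. A minimal preflight sketch, assuming a hypothetical DEPLOY_HOST environment variable standing in for a secret like DO_HOST, that fails fast when a stored host address has gone stale instead of timing out mid-deploy:

```python
# Preflight: confirm the deploy target answers on port 22 before the
# real deploy step runs. DEPLOY_HOST is a placeholder environment
# variable; 203.0.113.10 is a documentation-reserved example address.
import os
import socket
import sys

host = os.environ.get("DEPLOY_HOST", "203.0.113.10")
port = 22

try:
    # A recreated server with a new IP typically fails here quickly,
    # with a timeout or a refused connection.
    with socket.create_connection((host, port), timeout=5):
        print(f"ok: {host}:{port} is reachable")
except OSError as exc:
    print(f"preflight failed for {host}:{port}: {exc}", file=sys.stderr)
    sys.exit(1)
```

Run as an early CI step, this turns a long i/o timeout deep inside a deploy job into an immediate, clearly labeled failure.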

  • Validate_QA
    validate.qa (@Validate_QA) reported

    cursor can now auto-fix ci failures: agents that watch github, hunt down the issues, and push prs with real fixes. no more endless debugging loops. this changes how fast teams can ship without breaking stuff.
