GitHub Outage Map
The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, submit a report below.
The heatmap shows where recent user-submitted and social media reports cluster geographically; the density of these reports is indicated by the color scale.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Yokohama, Kanagawa | 1 |
| Gustavo Adolfo Madero, CDMX | 1 |
| Nice, Provence-Alpes-Côte d'Azur | 1 |
| Brasília, DF | 1 |
| Montataire, Hauts-de-France | 3 |
| Colima, COL | 1 |
| Poblete, Castilla-La Mancha | 1 |
| Ronda, Andalusia | 1 |
| Hernani, Basque Country | 1 |
| Tortosa, Catalonia | 1 |
| Culiacán, SIN | 1 |
| Haarlem, North Holland | 1 |
| Villemomble, Île-de-France | 1 |
| Bordeaux, Nouvelle-Aquitaine | 1 |
| Ingolstadt, Bavaria | 1 |
| Paris, Île-de-France | 1 |
| Berlin, Berlin | 2 |
| Dortmund, NRW | 1 |
| Davenport, IA | 1 |
| St Helens, England | 1 |
| Nové Strašecí, Central Bohemia | 1 |
| West Lake Sammamish, WA | 3 |
| Parkersburg, WV | 1 |
| Perpignan, Occitanie | 1 |
| Piura, Piura | 1 |
| Tokyo, Tokyo | 1 |
| Brownsville, FL | 1 |
| New Delhi, NCT | 1 |
| Kannur, KL | 1 |
| Newark, NJ | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problem, and issue reports on social media:
- Kairo (@Isahbless79) reported: Day 16 of building in Web3 from zero. I automated the pipeline and hit my first major infrastructure bottleneck. Here is today's technical breakdown: Pipeline Automation: I set up GitHub Actions to trigger the whale fetcher every 12 seconds. The Render API stays live, and the database now refreshes continuously in the background. Telegram Crash: I attempted to build a command menu for the bot (/set_filter, /start). It responded perfectly at first, but crashed the server after 30 minutes. The Root Cause: An asyncio event loop conflict between the Flask API and the telegram.ext library. The Fix: Decoupling the architecture. I am separating the Telegram bot into a standalone script, moving it to a different port, and shifting from polling to webhooks. Building through failures. Day 17 tomorrow.
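The root cause this report describes — a library trying to start its own event loop inside a process where another framework already owns one — can be reproduced in a few lines. This is a minimal generic sketch, not the poster's actual code; `telegram.ext`'s `run_polling()` calls `asyncio.run()` internally, which is exactly the call that fails in this situation:

```python
import asyncio

async def already_running_server():
    """Stands in for a framework (e.g. an async web server) that owns the loop."""
    try:
        # Any library that calls asyncio.run() internally cannot start here:
        asyncio.run(asyncio.sleep(0))
    except RuntimeError as exc:
        return f"conflict: {exc}"
    return "no conflict"

result = asyncio.run(already_running_server())
print(result)
# → conflict: asyncio.run() cannot be called from a running event loop
```

Splitting the bot into its own process, as the poster did, gives each component its own event loop and sidesteps the conflict entirely.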
- bankk (@bankkroll_eth) reported: @mitsuhiko The issue isn't OIDC, it's the `pull_request_target` that lets anyone with a GitHub account run code inside a privileged CI pipeline. Once you control the code that runs before the publish step, it doesn't matter how the publish is authenticated, OIDC or not.
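The pattern described above is a known-risky GitHub Actions configuration. This hypothetical workflow sketch (not any specific project's CI) shows the shape of the problem: `pull_request_target` runs in the context of the base repository, with its secrets, so explicitly checking out and executing the PR head's code hands that privilege to any forked pull request:

```yaml
# Hypothetical illustration of the risky pattern described in the report.
on: pull_request_target   # runs with the BASE repo's permissions and secrets

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # DANGER: checks out untrusted code from the fork into a
          # privileged context; its install/test scripts run with secrets
          ref: ${{ github.event.pull_request.head.sha }}
      - run: npm ci && npm test
```

Once attacker-controlled code runs in this job, it can read tokens or tamper with later steps, which is why the authentication method of the eventual publish step no longer matters.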
- Preetham Kyanam (@pkyanam) reported: Does anyone know how to set up @NousResearch Hermes to monitor my @github Actions? One of my apps has been failing so many things during deployment that I can't catch internally, so I'd rather have my agent just monitor, fix, push, loop, for me.
- Beau Johnson (@BeauJohnson89) reported: Hermes just passed OpenClaw in the metric builders should watch: not followers, not launch noise, actual usage. 271B tokens on OpenRouter, 143,705 GitHub stars, 22,432 forks, 9,974 open issues. That is not a side quest anymore; it's the market telling OpenClaw users that rival agents are getting real attention. My take is simple: ride the Hermes wave, learn what it does better, then bring the best ideas back into your own agent stack.
- Joseph N. Aburu (@Josephyala) reported: This is so so true. Software engineering has always been about solving real problems with code, not just cranking out lines. Once that mindset clicks, AI flips from threat to straight-up career rocket fuel. A controlled study on GitHub Copilot back in 2023 showed devs completed tasks 55% faster with it, and follow-up research keeps backing up similar gains in pull requests and cycle time. The engineers who win long-term are the ones using AI to think bigger on architecture and tradeoffs instead of just churning boilerplate. The rest are still stuck worrying about replacement.
- Failure is an option (@FarmingWithYHWH) reported: @Hesamation Yup - just this morning, GitHub was down again.
- Glitch Truth (@glitchtruth) reported: The Mitchell effect is real. Same thing happened with Ghostty, his terminal hit 25k GitHub stars in weeks before public release. When a respected builder vouches for indie tooling, distribution problem solved overnight. Bentley shipped Hunk solo and now every Zig dev on my timeline is installing it.
- PiWeb3Army (@PiWeb3Army) reported: "Every Scale Was Placed Before the Wing Could Carry Anything." The upgrade was not the announcement. It was the preparation for what the announcement makes possible. Pi Network's official pre-release upgrade package mainnet-v1.1-p23.0.1 has been released on GitHub — optimizing node stability at the infrastructure level and resolving two critical issues: database permissions and statistical anomalies. These are not cosmetic fixes. They are the precise technical corrections that distributed infrastructure requires before it can safely carry the next layer of capability. Every scale in this image refracts light differently — but each one is load-bearing, each one positioned so the structure above it can move without fracturing. This upgrade is exactly that kind of work. Invisible to most. Essential to everything. What this release is preparing the network to carry matters enormously. Testnet2 and Pi DEX are the layers being paved for right now. The DEX — currently operational on Testnet with swap and limit order functionality — requires the kind of underlying node stability this upgrade delivers before it can responsibly expand. Self-managed node operators can complete image upgrades as needed through the updated Docker image. Windows and macOS Pi Desktop users require no action — upgrades trigger automatically. Linux CLI users with auto-update disabled run a single command. Regular mobile Pioneers need not take any action at this stage. The wing does not announce its readiness. It simply becomes capable of carrying more. Database permissions corrected. Statistical anomalies resolved. The new phase has already begun. Most people will only see it when it flies.
- Cole (@colepulse) reported: @VictorTaelin Anthropic is compute constrained and bleeding from rate limits. Every user OpenAI poaches is one less complaint in the Claude Code GitHub issues and one more GPU freed up for enterprise contracts that actually pay. Losing the cheap users is a feature, not a bug.
- Robert Ta (@therobertta_) reported: THE SIGNAL-TO-CONTENT BRIDGE. The hardest part is not writing. It is knowing what to write about. Our signals skill identifies patterns: "GitHub activity spiked in auth module. Three Slack conversations about SSO. Two customer support tickets about login." The content skill takes that signal and asks: "Is there a thread here?" If confidence exceeds threshold, it drafts. Content starts with observation, not imagination.
- Andrea Intg. (@andreintg) reported: The "Made with AI" flag feels so hypocritical to me. Good on Epic for finally talking about the issue. Imagine a flag, before we had AI, to identify game devs who "stole" code from other GitHub repos or assets from TurboSquid without telling a soul.
- VibeReady (@vibereadyhq) reported: The bugs hide where you didn't think to look. Silently swallowed errors. Wrong defaults. Edge cases the AI didn't model. CodeRabbit analyzed 470 real GitHub PRs. Apiiro analyzed Fortune 50 repos. These aren't lab numbers. They're production.
- Lyrie.ai (@lyrie_ai) reported: 🚨 CVE-2026-33725 (Metabase Enterprise v1.47–1.59.3): RCE via H2 JDBC INIT injection in serialization imports. PoC live on GitHub. Full DB read + arbitrary file access on your BI server. Most orgs unpatched. Upgrade to v1.59.4+ NOW. #ZeroDay #Metabase
- KagariSoft (@KagariSoft) reported: Anecdotes from the development of ROL: Once, the game disappeared forever. Due to a problem with *** (version control software), upon executing a command, the entire project was completely deleted from the disk. Luckily, there were backups on GitHub, but of changes from 24 hours prior. Developer tip! Learn to use *** and GitHub, and keep your repository updated with every change!
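The tip above can be sketched concretely. This assumes nothing about the poster's setup; it just demonstrates, in a throwaway repository, why committing every meaningful change makes an accidental deletion cost minutes instead of a day:

```shell
set -e
repo=$(mktemp -d)            # throwaway repository for the demo
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > game.txt         # stand-in for the project files
git add -A
git commit -q -m "checkpoint: v1"

rm game.txt                  # simulate the disaster from the anecdote
git checkout -- game.txt     # the last committed state restores it instantly
cat game.txt                 # prints: v1
```

Pushing each checkpoint to a remote (e.g. GitHub) extends the same protection to the case where the whole disk, not just the working tree, is lost.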
- Tak 🦞 (@cherry_mx_reds) reported: I ran out of inference today, but then I learned I can keep going by connecting GitHub to my ChatGPT account. I'm doing research with Pro and then creating gh issues that I'll jump on when the reset comes around.