GitHub Outage Map
The map below depicts the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, make sure to submit a report below.
The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Tlalpan, CDMX | 1 |
| Quilmes, BA | 1 |
| Bengaluru, KA | 1 |
| Yokohama, Kanagawa | 1 |
| Gustavo Adolfo Madero, CDMX | 1 |
| Nice, Provence-Alpes-Côte d'Azur | 1 |
| Brasília, DF | 1 |
| Montataire, Hauts-de-France | 3 |
| Colima, COL | 1 |
| Poblete, Castilla-La Mancha | 1 |
| Ronda, Andalusia | 1 |
| Hernani, Basque Country | 1 |
| Tortosa, Catalonia | 1 |
| Culiacán, SIN | 1 |
| Haarlem, NH | 1 |
| Villemomble, Île-de-France | 1 |
| Bordeaux, Nouvelle-Aquitaine | 1 |
| Ingolstadt, Bavaria | 1 |
| Paris, Île-de-France | 1 |
| Berlin, Berlin | 2 |
| Dortmund, NRW | 1 |
| Davenport, IA | 1 |
| St Helens, England | 1 |
| Nové Strašecí, Central Bohemia | 1 |
| West Lake Sammamish, WA | 3 |
| Parkersburg, WV | 1 |
| Perpignan, Occitanie | 1 |
| Piura, Piura | 1 |
| Tokyo, Tokyo | 1 |
| Brownsville, FL | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problem, and issue reports on social media:
- Crypto Scores Rating (@CryptoScoresCom) reported: Did the team build before the money showed up? That's exactly what the "GitHub Before Crypto" metric tells you. It compares the first GitHub commit date to the token creation date. Positive number = code came first. Negative number = token came first. Ethereum: +589 days. Nearly two years of building with zero financial incentive. Solana: minus 63 days. Token launched before the repo even existed. Neither is an automatic verdict. But it tells you everything about priorities. CryptoScores just dropped a full tutorial breaking it down. Watch it now:
- nadya (@sosidudku) reported: ran Hermes Agent vs OpenClaw on local model Qwen 3.6 35B task: scrape GitHub star history, find what caused the growth spikes, build a live dashboard in the browser OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations.
- Hermes-Agent News (@HermesAgentSol) reported: ok wait garrytan/gbrain just crossed 15.9k stars on github. garry tan's personal hermes/openclaw agent brain. opinionated typescript and 2151 forks already. when a yc partner ships their own agent brain on your framework that's a real signal. also teknium landed the xai-oauth credential loop fix overnight. grok-4.3 now reports its real 1m context instead of 256k and the error message finally stopped blaming subscribers for being unsubscribed.
- Aki Ranin (@aki_ranin) reported: New Claude Code master prompt: "/goal assign next GitHub issue and start PR, iterate until no critical or high issues found with PR review skill"
- Peter Steinberger 🦞 (@steipete) reported: @yxcc Discord or GitHub Issues.
- AtomicNodes (@AtomicNodes) reported: Hermes Agent vs OpenClaw on Qwen 3.6 35B Local Model We asked agents to scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. MacBook Pro M5 Max, 64 GB. OpenClaw: 203k tokens, 12m 01s - wrote a bash script Hermes: 257k tokens, 33m 01s - wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations.
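The star-history scrape both of those reports describe can be approximated with GitHub's REST API: requesting a repo's stargazers with the `application/vnd.github.star+json` media type adds a `starred_at` timestamp to each entry. A minimal, unauthenticated sketch (so subject to low rate limits; the function names here are mine, not from either agent's output):

```python
import json
import urllib.request


def extract_star_times(batch):
    """Pull the starred_at timestamps out of one parsed stargazers page."""
    return [entry["starred_at"] for entry in batch]


def star_history(owner, repo, max_pages=3):
    """Fetch stargazer timestamps page by page from the GitHub API.

    The star+json media type is what adds 'starred_at' to each entry;
    without it the endpoint returns plain user objects.
    """
    times = []
    for page in range(1, max_pages + 1):
        url = (f"https://api.github.com/repos/{owner}/{repo}/stargazers"
               f"?per_page=100&page={page}")
        req = urllib.request.Request(
            url, headers={"Accept": "application/vnd.github.star+json"})
        with urllib.request.urlopen(req) as resp:
            batch = json.load(resp)
        if not batch:  # past the last page
            break
        times.extend(extract_star_times(batch))
    return times
```

Bucketing the returned timestamps by day or week yields the growth curve; the spikes can then be lined up against events found via web search, which is essentially what both agents did.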
- Glitch Truth (@glitchtruth) reported: OpenAI just put a coding agent on your phone. Codex, the model that originally powered GitHub Copilot in 2021, now ships as a mobile-native agent. You prompt from your phone, it spins up a cloud sandbox, runs the task, writes the diff, and opens a PR against your GitHub repo. No laptop, no terminal. "Fix the auth bug" typed at lunch becomes a merge-ready PR by the time you pay the check. GitHub's own 2022 study showed Copilot users complete tasks 55% faster. That was an autocomplete assistant living inside an IDE. The mobile agent doesn't assist you. It does the whole task in a sandbox and hands you the PR. The defensible skill is no longer typing the syntax. It's knowing what to build, what to ship, and what to measure after. Action this week: pull Codex on iOS, point it at a real repo, ask it to fix something small. You'll either see the next decade of work, or you'll convince yourself it isn't real yet. Either is information you didn't have on Monday.
- The Whizz AI (@TheWhizzAI) reported: 🚨Elon Musk just open-sourced the algorithm that controls what 600 million people see every day. Not a summary. Not a blog post. The actual production code. Live on GitHub right now. Facebook won't do this. TikTok guards it like a state secret. Instagram calls it proprietary. X just put it on the internet for free. This is the first time in history a major social platform has released its live, production-grade recommendation algorithm the same day it went live for users. Here's what's actually inside: →Home Mixer, the orchestration layer that assembles your entire feed →Thunder, which stores and ranks every post from accounts you follow →Phoenix, the Grok transformer that mines the entire global post library to find content you didn't know you wanted →Zero manual feature engineering. Grok watches what you click, like, and dwell on. That IS the algorithm. →Updated every 4 weeks with full developer notes. Live. In public. Why did Musk do this? The EU fined X €120 million for transparency violations. France launched a separate investigation into algorithmic bias. Threads just overtook X in daily active users for the first time. And Musk said out loud on the day of release: "We know this algorithm is dumb and needs major improvements. But at least you can see us struggling to fix it in real time. No other social platform would dare do this." Here's the wildest part: You can now read exactly why your posts go viral. Or why they die at 12 impressions. No more guessing the algorithm. No more $500/mo "X growth" courses. No more "post at 9 AM on Tuesdays" nonsense. The answer is literally in the code. Apache 2.0 license. Full source. Updated monthly. The most transparent thing any social platform has ever done.
- Danilo (@Daniel_adsss) reported: Elon just dropped the entire X algorithm on GitHub and the code tells you exactly how to win the For You feed. Grok scores every post based on predicted engagement. Likes, replies, reposts all push you up. Blocks, mutes and reports drag you down. Which means every sharp comment you leave on a big account is training the algorithm to show more people like you that content. 16.5k stars in 24 hours. Developers already pulling it apart.
- Prime 🏳️‍⚧️ (@Prim3st) reported: @AAO23114 @SolaraProto Unfortunately that's probably not possible without a dedicated server... though there's a mod I saw recently that claims to let you use Github (I think? It was definitely using ***) to store/backup world saves. Maybe you could use something like that to have a shared world?
- Emedy (@EmedyXBT) reported: @Bybit_Official @BybitAfrica Local virtual dollar cards made perfect sense when you first discovered them. Naira cards were restricted from processing international transactions, which meant apps like Spotify, Amazon, Adobe, GoDaddy, GitHub, and so many others became unreachable. Local fintech apps launched USD-denominated virtual cards within minutes. The problem looked finished. So we used them, recommended them to friends, and kept using them.
- Jeff Hayes (@JD__Hayes) reported: @FredKSchott I'm interested, but the web page is down and I could not find it on GitHub.
- Louis Gleeson (@aigleeson) reported: Grok runs the X algorithm. I just read the entire open-sourced codebase line by line. Here is exactly what makes a post go viral on X right now (save this): xAI quietly dropped the full For You algorithm on GitHub. 16,500 stars. Apache 2.0. Every Rust file, every Python script, every ranking signal. The first thing you need to understand is that there is no hand-engineered ranking anymore. None. xAI removed every single human-written rule from the system. The README states it directly. A Grok-based transformer does all the ranking now. That changes everything about how you should post. The transformer does not care about your follower count. It does not care about your blue check. It does not care about hashtags. It is looking at one thing. Your post's predicted engagement score across 15 specific actions. Here are the exact 15 actions the model is predicting for every post in your feed right now. Copied directly from the code: P(favorite). P(reply). P(repost). P(quote). P(click). P(profile_click). P(video_view). P(photo_expand). P(share). P(dwell). P(follow_author). P(not_interested). P(block_author). P(mute_author). P(report). The first eleven are positive. They push your post up. The last four are negative. They push it down. Your final score is the weighted sum of all fifteen. That is the formula. That is what every viral post is solving for whether the author knows it or not. Now look closer at the list. Eleven different ways to win. Most creators only optimize for likes and reposts. They are leaving nine signals on the table. The strongest signal in that list is dwell. Time spent on your post. The algorithm tracks how long someone stops scrolling to read what you wrote. A 400-word post that holds someone for 12 seconds beats a one-liner that gets 50 likes. The model has learned that dwell predicts every other engagement. This is why long posts are exploding right now. Not because X "promotes" them. Because they generate dwell, and dwell stacks on top of every other prediction the model is making. The second thing buried in the code that nobody is talking about is candidate sourcing. Your post enters the feed through two pipelines. Thunder serves your post to your followers. Phoenix serves your post to everyone else. Phoenix is the one that makes you go viral. Phoenix is a two-tower model. One tower encodes the user. The other tower encodes every post on the platform. It does similarity search using dot product matching against the global corpus. Then it pushes the top matches into feeds of people who have never followed you. This is exactly how a 12-follower account suddenly hits 800,000 views. Phoenix found a semantic match between the post and a user's engagement history, and the transformer scored it high on its 15 actions. Which means your post is not competing with your followers' posts. It is competing for embedding space. The way you win Phoenix is specificity. The two-tower model rewards posts that sit in a clear semantic neighborhood. Vague posts get vague embeddings and never get retrieved. Sharp posts about a specific topic with specific words get pulled into feeds of people obsessed with that topic. This is why "I built a SaaS" gets nothing and "I built a Postgres-to-Snowflake CDC pipeline in 4 hours using Estuary" goes viral. Same person. Same product. Completely different embedding. The third thing in the code is the Author Diversity Scorer. The model deliberately attenuates repeated author scores in the same feed. Translation: if your last three posts already got served to a user, the fourth post gets a penalty. This kills the "post 8 times a day for the algorithm" strategy. The algorithm is specifically engineered to dampen that. Better to post fewer times with stronger content than to flood and have your own posts compete with each other. The fourth thing is the filter list. Before any post gets scored, it has to pass through ten filters. The MutedKeywordFilter. The PreviouslySeenPostsFilter. The AuthorSocialgraphFilter. Plus a final VFFilter that removes anything classified as deleted, spam, violence, or gore. What kills your reach more than anything else is the PreviouslySeenPostsFilter. If a user has already seen your post once, you are filtered out completely from their feed. Forever. Which means every reply you make to a viral tweet that does not get visibility is permanently dead weight for that user. This is why the people who win at X reply only when their reply itself is good enough to be a standalone post. The last thing, and the one that should change how you write every single post: candidate isolation. During ranking, the transformer cannot let your post attend to other posts in the batch. It only attends to the user's engagement history. Your post is being scored alone. Against itself. Against what the user has previously engaged with. That is the entire game. Stop writing for the timeline. Write for the engagement history of the people you want to reach. Find the topics they already like, the accounts they already follow, the threads they already saved. Write into that semantic space. Phoenix will do the rest. The algorithm is no longer a mystery. It is sitting on GitHub at 16,500 stars. Apache 2.0. Anyone can read it. Almost nobody will. Link in comments.
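The scoring scheme that report describes, a weighted sum over fifteen predicted engagement probabilities, can be sketched in a few lines. The action names follow the report; the weights below are invented for illustration and are not the values used by xAI:

```python
# Hypothetical sketch of a weighted-sum engagement score.
# Action names are taken from the report above; the weights are made up.

POSITIVE = ["favorite", "reply", "repost", "quote", "click", "profile_click",
            "video_view", "photo_expand", "share", "dwell", "follow_author"]
NEGATIVE = ["not_interested", "block_author", "mute_author", "report"]

# Illustrative weights: positive actions add to the score, negative ones
# subtract (with a heavier penalty, as the report implies).
WEIGHTS = {a: 1.0 for a in POSITIVE} | {a: -2.0 for a in NEGATIVE}


def score(predictions):
    """Combine per-action probabilities into a single ranking score."""
    return sum(WEIGHTS[a] * predictions.get(a, 0.0) for a in WEIGHTS)


# Under these toy weights, a post that holds attention (high dwell)
# outscores one that only collects quick likes.
high_dwell = {"dwell": 0.40, "favorite": 0.10}
quick_like = {"favorite": 0.30}
assert score(high_dwell) > score(quick_like)
```

The sketch also makes the report's "eleven ways to win" point concrete: any of the positive probabilities contributes to the same sum, so a post strong on dwell or profile clicks can outrank one that only accumulates likes.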
- ฿Ø₮₴Ø₦Ɇ (@botsone) reported: @shub0414 I have a home *** server - I run gitea on my raspberry pi. It's really good. I actually just downloaded my entire github, told hermes to extract it and upload every repo to my home server, and it one-shot it in about 10 minutes using a local LLM.
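The whole-account backup described above can be reproduced with plain git: a bare `--mirror` clone captures every ref, and `git push --mirror` replays it onto a Gitea remote. A sketch under stated assumptions; `GH_USER` and `GITEA_HOST` are placeholders, and the Gitea side must accept the push (matching empty repos, or push-to-create enabled):

```shell
#!/bin/sh
# Hypothetical sketch: mirror every public repo of a GitHub user onto a
# self-hosted Gitea instance. GH_USER and GITEA_HOST are placeholders.

GH_USER="example-user"
GITEA_HOST="git.example.home"

# Derive a repo name from its clone URL, e.g. .../project.git -> project.
repo_name() {
    basename "$1" .git
}

# List clone URLs via the GitHub API (first page of public repos only).
list_repos() {
    curl -s "https://api.github.com/users/$GH_USER/repos?per_page=100" |
        grep -o '"clone_url": *"[^"]*"' | cut -d'"' -f4
}

# Bare-clone one repo with all refs, then replay it onto Gitea.
mirror_one() {
    url="$1"
    name=$(repo_name "$url")
    git clone --mirror "$url" "$name.git" &&
        (cd "$name.git" &&
         git push --mirror "https://$GITEA_HOST/$GH_USER/$name.git")
}

# Usage (network-touching, so left commented out):
#   list_repos | while read -r url; do mirror_one "$url"; done
```

Because `--mirror` pushes branches, tags, and deletions alike, each Gitea copy stays an exact replica of the GitHub original on every rerun.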
- 闇ぼの@𝕏級ピンク珍獣.rs (@kenbono13) reported: @mcsetty @bee_fumo Most software that directly uses GitHub for the download, build, installation (NOT using the AUR) isn't installed under root in the first place. It's installed under $HOME so it wasn't an issue. AUR is the exception since it uses the Arch build system.