GitHub status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below shows the number of GitHub reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, we consider an outage to be in progress.
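As a rough illustration, that detection rule is a simple threshold check. The sketch below is not the site's actual implementation; the counts and baseline are made-up values.

```python
# Minimal sketch of baseline-threshold outage detection.
# reports_per_hour: report counts for each of the last 24 hours (hypothetical data).

def detect_outage_hours(reports_per_hour, baseline):
    """Return the hours whose report count exceeds the baseline (the red line)."""
    return [hour for hour, count in enumerate(reports_per_hour) if count > baseline]

# Example: a spike at hours 13-14 against a baseline of 20 reports/hour.
counts = [3, 5, 4, 6, 2, 7, 5, 8, 6, 9, 11, 14, 18, 45, 52, 19, 12, 9, 7, 5, 4, 6, 3, 2]
print(detect_outage_hours(counts, baseline=20))  # → [13, 14]
```

A real detector would typically smooth the counts and use a time-of-day-dependent baseline, since normal report volume varies across the day.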
At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the problems most frequently reported by GitHub users through our website:
- Website Down (62%)
- Errors (21%)
- Sign in (18%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Sign in | 2 days ago |
| | Website Down | 2 days ago |
| | Website Down | 4 days ago |
| | Sign in | 5 days ago |
| | Website Down | 9 days ago |
| | Website Down | 10 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
The latest outage, problem, and issue reports from social media:
- ฿Ø₮₴Ø₦Ɇ (@botsone) reported: @shub0414 I have a home *** server - I run gitea on my raspberry pi. It's really good. I actually just downloaded my entire github, told hermes to extract it and upload every repo to my home server, and it one-shot it in about 10 minutes using a local LLM.
- Gopinho (@gopiinho) reported: @apoorveth @walletchan_ will do for sure, also open issues on github if there is some backlog
- SILVER (@Bitch_Paratha) reported: @ThePrimeagen We should prolly make a github typa service and name it unicornhub or something (we can think of a better name tbh) and whenever our servers are down we show the GitHub logo
- AtomicNodes (@AtomicNodes) reported: Hermes Agent vs OpenClaw on Local Qwen 3.6 35B We asked agents to scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. MacBook Pro M5 Max 64Gb. OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations
- AI Signal (@AISignal_X) reported: @EMostaque @grok @xai I appreciate the feature request, but I should clarify—I'm not affiliated with xAI or Grok. I'm an independent AI news account. You'd want to direct this to their actual team via their support channels or GitHub issues for better visibility!
- David Cramer (@zeeg) reported: @eternalmagi dont have context, is there a github issue by chance
- Shubh Jain (@shubh19) reported: @devXritesh now it’s mostly docs, blogs, github issues, and AI explanations instead of full books
- Hermes-Agent News (@HermesAgentSol) reported: ok wait garrytan/gbrain just crossed 15.9k stars on github. garry tan's personal hermes/openclaw agent brain. opinionated typescript and 2151 forks already. when a yc partner ships their own agent brain on your framework that's a real signal. also teknium landed the xai-oauth credential loop fix overnight. grok-4.3 now reports its real 1m context instead of 256k and the error message finally stopped blaming subscribers for being unsubscribed.
- Nick Farina (@nfarina) reported: My "work" so far this morning has been: 📋 Taking product feedback from my in-app Agent and feeding it to Claude Code 🌐 Asking Copilot on Github to explain a recent change to an open-source lib and pasting its explanation for Codex to fix I'm basically middle management now.
- The Doctor (@Doctorthe113) reported: @_yorunoken @sebastienlorber On my chromium and Firefox based browsers I only see a slight delay/flickers. Probably caused by the huge number of texts. I don't see how GitHub can fix that without using something like pretext. But what op posted seems to only affect macos. Unfortunately macos users tend to be entitled people. So people like op will continue to blame GitHub instead of using a better browser or complain to chromium.
- Shadow (@4shadowed) reported: @alex_marples @openclaw Have you filed any GitHub issues? Helped test the betas? Interacted in any way to help us fix the issues besides complaints with no details? It’s working very well for just about everybody who’s given feedback, you should stop demanding things and start contributing to it, it’s open source for a reason
- Saskia | Marketing Growth Consulting 🇨🇭 (@solobrandsaskia) reported: Getting viral on X in 2026 boils down to one thing: beating the Grok-based transformer that ranks every post in the “For You” feed. The algorithm Elon just open-sourced on GitHub is now fully powered by the Phoenix transformer (Grok-based). No more hand-tuned rules or old heuristics. It predicts your specific engagement probability — likes, replies, reposts, bookmarks, dwell time, etc. — based purely on learned behavior patterns. How the algorithm actually works (the 3-stage pipeline) Candidate Sourcing (pool of ~1,500 posts) ~50% in-network: posts from people you follow. ~50% out-of-network: posts from strangers grouped into massive interest clusters. This is where virality happens for non-followers. Ranking (the make-or-break step) The Grok-based transformer scores every candidate on predicted engagement. Top signals: Engagement velocity in the first 30 minutes = massive multiplier Replies are weighted far more heavily than likes Reposts/shares matter more than raw views Dwell time: how long people stop scrolling and actually consume the content Author authority/credibility (Premium accounts get a major reach boost) Recency + relationship strength Filtering Spam reports, low-quality signals, mutes, and blocks hurt reach hard. One negative trust signal can outweigh multiple likes. Virality trigger: If your post gets strong early engagement from your initial audience, the algorithm starts testing it on wider out-of-network audiences. That’s when posts snowball. The exact playbook to grow on X right now Get X Premium Premium now acts like an authority signal. Accounts without it have a much harder time scaling reach. 
Win the first 30 minutes Post when your audience is online Reply to every comment quickly Early momentum matters more than almost anything else Slow start = dead post Create content the transformer wants to push Strong hook in the first 3–5 words Contrarian takes, numbers, bold claims, tension Content that sparks replies and debate Emotional + useful beats “educational only” High dwell time matters: threads, long-form posts, charts, screenshots, videos Rich media is heavily favored Original content performs best What kills reach External links in the main post Engagement bait (“like if you agree”) Posting too frequently with mediocre content Generic low-effort posts Hashtags have little impact now because the system relies more on semantic understanding Daily system for consistent growth Post 1 strong piece of content daily Batch content weekly Go hard on engagement in the first hour Stay focused on 2–3 core topics so the algorithm understands your audience cluster Quick checklist before posting X Premium active Strong first-line hook Visual or video attached Thread format if relevant No external link in the main post Ready to engage for 30+ minutes after posting The game is becoming increasingly transparent. Most people still create content like it’s 2021. The advantage now comes from understanding how the ranking system actually evaluates attention, conversation, and retention.
- potina_ (@potina_) reported: i got nothing going on in my life so im refreshing github every 10 minutes to see if the dev responded to my issue or not
- Ashish Ranjan (@Ashish_050488) reported: build on laptop (3 secs), upload only the dist folder. 500kb. server just serves files now, doesn’t build anything. deploy went from 15 mins to 5 secs. turns out big companies do this exact thing, just automated. github actions next so i never think about
- Fareesh Vijayarangam (@fareesh) reported: @ThePrimeagen tbh I have zero reliability issues with GitHub I wonder if it's a western hemisphere thing
- DemonKingSwarn (@DemonKingSwarn) reported: @ThePrimeagen at this point my self hosted *** server has more uptime than github which is funny because they have more money than me
- chandog (@thechandog) reported: @kevinrose @digg how are you constructing novelty? stars are 40c on the dollar and a terrible way to measure anything on github.
- Louis Gleeson (@aigleeson) reported: Grok runs the X algorithm. I just read the entire open-sourced codebase line by line. Here is exactly what makes a post go viral on X right now (save this): xAI quietly dropped the full For You algorithm on GitHub. 16,500 stars. Apache 2.0. Every Rust file, every Python script, every ranking signal. The first thing you need to understand is that there is no hand-engineered ranking anymore. None. xAI removed every single human-written rule from the system. The README states it directly. A Grok-based transformer does all the ranking now. That changes everything about how you should post. The transformer does not care about your follower count. It does not care about your blue check. It does not care about hashtags. It is looking at one thing. Your post's predicted engagement score across 15 specific actions. Here are the exact 15 actions the model is predicting for every post in your feed right now. Copied directly from the code: P(favorite). P(reply). P(repost). P(quote). P(click). P(profile_click). P(video_view). P(photo_expand). P(share). P(dwell). P(follow_author). P(not_interested). P(block_author). P(mute_author). P(report). The first eleven are positive. They push your post up. The last four are negative. They push it down. Your final score is the weighted sum of all fifteen. That is the formula. That is what every viral post is solving for whether the author knows it or not. Now look closer at the list. Eleven different ways to win. Most creators only optimize for likes and reposts. They are leaving nine signals on the table. The strongest signal in that list is dwell. Time spent on your post. The algorithm tracks how long someone stops scrolling to read what you wrote. A 400-word post that holds someone for 12 seconds beats a one-liner that gets 50 likes. The model has learned that dwell predicts every other engagement. This is why long posts are exploding right now. Not because X "promotes" them. 
Because they generate dwell, and dwell stacks on top of every other prediction the model is making. The second thing buried in the code that nobody is talking about is candidate sourcing. Your post enters the feed through two pipelines. Thunder serves your post to your followers. Phoenix serves your post to everyone else. Phoenix is the one that makes you go viral. Phoenix is a two-tower model. One tower encodes the user. The other tower encodes every post on the platform. It does similarity search using dot product matching against the global corpus. Then it pushes the top matches into feeds of people who have never followed you. This is exactly how a 12-follower account suddenly hits 800,000 views. Phoenix found a semantic match between the post and a user's engagement history, and the transformer scored it high on its 15 actions. Which means your post is not competing with your followers' posts. It is competing for embedding space. The way you win Phoenix is specificity. The two-tower model rewards posts that sit in a clear semantic neighborhood. Vague posts get vague embeddings and never get retrieved. Sharp posts about a specific topic with specific words get pulled into feeds of people obsessed with that topic. This is why "I built a SaaS" gets nothing and "I built a Postgres-to-Snowflake CDC pipeline in 4 hours using Estuary" goes viral. Same person. Same product. Completely different embedding. The third thing in the code is the Author Diversity Scorer. The model deliberately attenuates repeated author scores in the same feed. Translation: if your last three posts already got served to a user, the fourth post gets a penalty. This kills the "post 8 times a day for the algorithm" strategy. The algorithm is specifically engineered to dampen that. Better to post fewer times with stronger content than to flood and have your own posts compete with each other. The fourth thing is the filter list. Before any post gets scored, it has to pass through ten filters. 
The MutedKeywordFilter. The PreviouslySeenPostsFilter. The AuthorSocialgraphFilter. Plus a final VFFilter that removes anything classified as deleted, spam, violence, or gore. What kills your reach more than anything else is the PreviouslySeenPostsFilter. If a user has already seen your post once, you are filtered out completely from their feed. Forever. Which means every reply you make to a viral tweet that does not get visibility is permanently dead weight for that user. This is why the people who win at X reply only when their reply itself is good enough to be a standalone post. The last thing, and the one that should change how you write every single post: candidate isolation. During ranking, the transformer cannot let your post attend to other posts in the batch. It only attends to the user's engagement history. Your post is being scored alone. Against itself. Against what the user has previously engaged with. That is the entire game. Stop writing for the timeline. Write for the engagement history of the people you want to reach. Find the topics they already like, the accounts they already follow, the threads they already saved. Write into that semantic space. Phoenix will do the rest. The algorithm is no longer a mystery. It is sitting on GitHub at 16,500 stars. Apache 2.0. Anyone can read it. Almost nobody will. Link in comments.
- C. ₿utt 📵 (@ctbutt114) reported: @zquestz Reports are an issue with GG20, which was identified last month and set to be addressed. However, being open source, the bug was revealed via GitHub, & someone took advantage. Single bad actor on a new node. DLKS has been on the roadmap. Needed faster now.
- Humi (@byteHumi) reported: @gxjo_dev For them I have the last paragraph...just sit down see what's that thing you are really good at And scape and find out all the yc startups or startup in general that got recent findings and you can easily get the emails of the founders from GitHub commits ...do good cold dms with your best work
- masamune🌋 (@masamune_hybs) reported: The real story behind $GITLAWB is that the product started moving before the price did. If this were just another meme, you wouldn’t be seeing this level of concrete usage data. OpenClaude: ・26.8k GitHub stars ・8.5k forks ・615 commits ・Gitlawb OpenGateway with MiMo added in v0.11.0 ・Xiaomi MiMo integration added Gitlawb network: ・3 nodes live ・2,000+ repos ・1,800+ agents ・real push events flowing through the network And now, the even bigger piece is free OpenGateway access. Since OpenClaude v0.11.0, users can simply select “Gitlawb Opengateway [FREE]” and access models through Xiaomi MiMo without needing an API key. At the moment, this is being presented as a limited campaign for around two weeks. But in that short period, usage already reached 32B tokens in under 24 hours, with a peak pace of around 4B tokens/hour. So this is not just hype because something is free. Builders are actually touching it, testing it, and starting to use it. That matters. Gitlawb is not “an app that uses AI.” It is infrastructure for AI to work. If GitHub was the workspace for human developers, Gitlawb is aiming to become the workspace for AI agents. As AI agents grow, they will need: Identity. Permissions. Repos. History. Signatures. Reviews. Persistent storage. Incentive design. Gitlawb is going straight into the middle of that stack. And on top of that, it has OpenClaude as the entry point. You can try it for free. Agents can write code. Agents can push to repos. Demos are shipping. External projects are starting to use it. Repos and agents are growing on the network. That flow has already started. And this is where $GITLAWB’s utility starts to matter. More AI agents. More repos. More pushes. More PRs and issues. More builders using the network. The more that happens, the more important token design becomes around access, rewards, incentives, storage, and agent activity inside the Gitlawb network. 
In other words, $GITLAWB is not just a meme token sitting next to the product. It has the potential to matter as network usage grows. Of course, it is still alpha. The node count is still small. Replication is still developing. OpenGateway free access is currently limited-time. Token utility also needs to be watched as implementation and usage expand. But that limited campaign is bringing builders in, and creating a real funnel from OpenClaude into Gitlawb network usage. That is the key. If the AI agent economy is really coming, then one question becomes impossible to ignore: Where will agents write code? Where will they own repos? Where will their contributions be proven? $GITLAWB already has: A working product. Early real usage numbers. A funnel bringing builders in. And a future network utility narrative. That’s what makes it interesting. Respect to @kevincodex and @gitlawb. They’re not just talking about the AI agent future. They’re shipping it. #AIagent #Web3 #Base
- Lakshmi Tanmay (@lakshmitanmay) reported: @ThePrimeagen Github is the only platform/service I believe that genuinely needs a proper rewrite… clearly something fundamental is broken.
- Dinuka (@desilvakdn) reported: my whole feed is creators 'breaking down' the X algo on github, read x amount of lines and somehow no two takes agree. they're just farming engagement. Dont waste your time on others and take a look yourself!
- Moe Sbaiti (@MoeSbaiti) reported: WHAT THE FRAMING GETS WRONG Most posts today are saying "Grok added a new feature." That framing is backwards. What happened is that an agent framework with over 110,000 GitHub stars, the number 1 ranking on OpenRouter, and an NVIDIA endorsement just got native access to one of the most capable models available through a simple OAuth login. xAI made the announcement. Not Nous Research. Hermes Agent also self-improves. When it solves a hard problem, it writes a skill file for that solution and saves it. The longer it runs on your specific workflows, the more capable it becomes for your specific context. That is not how people are talking about this today. The memory layer and the self-improvement loop are the actual product. Grok is the engine.
- BlockXBT (spirit/acc) (@0xblockXBT) reported: Due to some issue with github I had claim the same with new CA GRZFGTFNbNxTTRCVDrMvhE9Pp86HQ1ehpZ7DqgGTpump
- Erik Goins (@erikgoinsHQ) reported: I built a financial forecasting app for our real estate business. Some take aways: 1. It's incredible what you can do with AI. This took me ~3 days part time. 2. If you're not a dev, good luck... Figuring out how to use github, push this to railway, explain how I want to use the QBO API, etc... there's still a big learning curve here. 3. Domain expertise is still very real. The first version of this was terrible. I had to help the AI create forecasting rules. 4. Businesses (enterprises) are going to need a lot of AI governance. Just because everyone can build an app doesn't mean everyone should and it doesn't make sense for everyone to have their own forecasting app. You really want one well done app, not 100 bad ones. 5. We're not replacing QBO. Too ingrained- it gets to stay the system of record. Looks like there's still a very real moat for the right SaaS products. Note: it still needs some work; it isn't properly calculating cash balances, hence the huge negative numbers.
- Adi (@wtfaditya_) reported: @azwan_ Yes its down, They don’t want us to deploy anything on Friday, For global wellbeing 🫡 @github
- Lee Penkman (@LeeLeepenkman) reported: @gxjo_dev stupidity... no... frupidity basically. like the exec cfo team is like well what if we reduce headcount wouldnt profitability go up? Like yes but you just wont be a good product company without a good product... like you are already struggling to compete with GitHub lmao... how u gna compere with codex n claude when they do repos? Also theres just fear that these devs cant learn AI which is kind of wrong because devs seem to be best placed to leverage AI of all? idk. im just guessing. lots of saas companies just doing layoffs had hired too many people having thought they would keep growing then they didnt their stock went way down and becomes harder to raise money for them because of bearish outlook for them competing with claude so investors scared off so harder for them to afford lots of developers so kind of start sinking. the devs would do better elsewhere anyway better to be on a new ship instead of sinking one.
- Michael Martino (@battista212) reported: Fragnesia (CVE-2026-46300) — third Linux kernel LPE in two weeks. Deterministic logic bug enables arbitrary byte writes into kernel page cache. Directly overwrites /usr/bin/su. One PoC run equals instant root. PoC public on GitHub. Temp fix: sudo rmmod esp4 esp6 xfrmuser
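Several of the posts above describe the open-sourced X ranking step the same way: a final score computed as a weighted sum of 15 predicted action probabilities, with candidates retrieved beforehand by a two-tower model using dot-product similarity. The sketch below illustrates just those two ideas; every weight, function name, and number here is hypothetical and not taken from the actual codebase.

```python
# Illustrative sketch of the ranking described in the posts above.
# Hypothetical weights: positive actions push a post up, negative actions pull it down.
WEIGHTS = {
    "favorite": 1.0, "reply": 6.0, "repost": 3.0, "quote": 3.0,
    "click": 0.5, "profile_click": 1.0, "video_view": 0.5,
    "photo_expand": 0.5, "share": 2.0, "dwell": 4.0, "follow_author": 8.0,
    "not_interested": -10.0, "block_author": -50.0,
    "mute_author": -30.0, "report": -100.0,
}

def engagement_score(probs):
    """Final score: weighted sum over the predicted probabilities of all 15 actions."""
    return sum(WEIGHTS[action] * p for action, p in probs.items())

def retrieve(user_vec, post_vecs, k=3):
    """Two-tower retrieval sketch: rank candidate posts by dot product
    between the user embedding and each post embedding."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return sorted(post_vecs, key=lambda v: dot(user_vec, v), reverse=True)[:k]
```

Under weights like these, a post with moderate predicted reply and dwell probabilities outscores one with a higher like probability alone, which matches the posts' claim that replies and dwell time are weighted above likes, and a single strong negative signal (a block or report) can wipe out many positive ones.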