GitHub status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below shows the number of GitHub reports received over the last 24 hours, by time of day. When the number of reports rises above the baseline, represented by the red line, an outage is likely in progress.
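To make the detection rule concrete, here is a minimal sketch of the baseline comparison, assuming hourly report counts and a simple baseline derived from the average volume; the counts and the threshold multiplier are hypothetical illustrations, not values from the live graph.

```python
from statistics import mean

# Hypothetical report counts for each of the last 24 hours (not live data).
reports_per_hour = [3, 2, 4, 1, 2, 3, 5, 2, 1, 0, 2, 3,
                    4, 2, 1, 3, 2, 41, 58, 37, 4, 3, 2, 1]

# The baseline (the red line on the graph) represents typical report volume;
# here it is approximated as a multiple of the average hourly count.
baseline = mean(reports_per_hour) * 3

# Hours whose report count exceeds the baseline suggest a possible outage.
for hour, count in enumerate(reports_per_hour):
    if count > baseline:
        print(f"Hour {hour:02d}: {count} reports exceed baseline {baseline:.1f} -> possible outage")
```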
At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the problems most frequently reported by GitHub users through our website.
- Website Down (62%)
- Errors (21%)
- Sign in (18%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Sign in | 3 days ago |
| | Website Down | 3 days ago |
| | Website Down | 5 days ago |
| | Sign in | 6 days ago |
| | Website Down | 10 days ago |
| | Website Down | 10 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
The latest outage, problem, and issue reports from social media:
- Alef Benson (@AlefBens) reported: @_sirajuddeen_ @OfcMachete19 @iupdate I've been burnt too many times. Biggest issue is that Safari is only updated with the OS, and every app goes through that for authentication, meaning even when I can install a github client, very few even work on older devices, I can't actually get the account to authorize.
- nadya (@sosidudku) reported: We decided to benchmark Hermes Agent vs OpenClaw on a real task. ran both on local Qwen 3.6 35B. task: scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations.
- Scott Rudy (@scottrudy) reported: @davidfowl I have GitHub Actions for Static Web Apps with .Net azure functions, but they refuse to update for .Net 10. Still stuck on 9 despite open issues.
- Emmanuel Olajide (@E_m_m_a_ola) reported: Build Bulletproof Runbooks & Playbooks Every alert should have a one-click “what to do” guide. Store them in GitHub + link directly in PagerDuty. No more 3 a.m. panic. Just follow the steps and fix it in minutes.
- Jason (@jasonbunnell) reported: @claudeai FEATURE REQUEST: if user finds an issue with Claude Code and Claude resolves (or not), you should auto increment GitHub issue to keep track of real user issues per issue to resolve as needed Had an issue with Claude Code for VS Code extension but noticed only 10 likes
- NEET INTEL (@neetintel) reported: A post "decoding" X's new algorithm has gone viral. It tells you what's dead, what wins, and to screenshot it. X open-sourced the entire algorithm on GitHub, so I downloaded it and checked the claims against the real code. Most of it doesn't hold up. What the post got WRONG: → "Small accounts get a 3x boost from out-of-network reach." It's the opposite. One part of the code (a file called oon_scorer) exists purely to turn DOWN posts from people you don't follow. Its own comment says "prioritize in-network." The thread printed the algorithm backwards. → "Media gets 2x the weight." There's no 2x. The code just records whether a post has an image. It's a plain yes/no without any multiplier attached. → "Posting 4+ times a day triggers a penalty." There's a real rule that stops one person flooding your feed. But here's the deal: it only spaces out how often you show up in a single scroll. There's no daily count, and no number 4. That was invented. → "Closers like 'what do you think?' get you flagged." There is no engagement-bait detector anywhere in the code. → "Long 4,000-character posts get boosted." I searched the whole codebase for "4000." Nothing. What it got RIGHT (one thing): → Replies really are judged by WHO replies, not just how many. The code has a setting for whether a large account joined your thread. Credit where due. The irony? The repo ships a file that scores post quality. One thing it measures is literally called a "slop score" — X built a tool to detect low-effort filler. A recycled "what's dead / what wins" thread is exactly that. The takeaway? X's algorithm is public. Anyone can open it, but almost nobody does. Instead, they reshare a thread that summarized a blog that paraphrased a tweet. When a post hits you with confident numbers, ask the one question that matters: did they actually open the file?
- Benitto J D (@BenittoJD) reported: Github actions are down again
- Java (@rishabhjava) reported: @github How about the existing product stops going down first
- Emil Privér (@emil_priver) reported: github actions is broken today
- The Whizz AI (@TheWhizzAI) reported: 🚨Elon Musk just open-sourced the algorithm that controls what 600 million people see every day. Not a summary. Not a blog post. The actual production code. Live on GitHub right now. Facebook won't do this. TikTok guards it like a state secret. Instagram calls it proprietary. X just put it on the internet for free. This is the first time in history a major social platform has released its live, production-grade recommendation algorithm the same day it went live for users. Here's what's actually inside: →Home Mixer the orchestration layer that assembles your entire feed →Thunder stores and ranks every post from accounts you follow →Phoenix the Grok transformer that mines the entire global post library to find content you didn't know you wanted →Zero manual feature engineering Grok watches what you click, like, and dwell on. That IS the algorithm. →Updated every 4 weeks with full developer notes. Live. In public. Why did Musk do this? The EU fined X €120 million for transparency violations. France launched a separate investigation into algorithmic bias. Threads just overtook X in daily active users for the first time. And Musk said out loud on the day of release: "We know this algorithm is dumb and needs major improvements. But at least you can see us struggling to fix it in real time. No other social platform would dare do this." Here's the wildest part: You can now read exactly why your posts go viral. Or why they die at 12 impressions. No more guessing the algorithm. No more $500/mo "X growth" courses. No more "post at 9 AM on Tuesdays" nonsense. The answer is literally in the code. Apache 2.0 license. Full source. Updated monthly. The most transparent thing any social platform has ever done.
- AtomicNodes (@AtomicNodes) reported: Hermes Agent vs OpenClaw on Qwen 3.6 35B Local Model We asked agents to scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. MacBook Pro M5 Max 64Gb. OpenClaw: 203k tokens, 12m 01s - wrote a bash script Hermes: 257k tokens, 33m 01s - wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations
- Alex Maximiano (@FARTURATECH) reported: @enunomaduro Hi Nuno, the problem wasn't with Laravel, but with GitHub, which wasn't able to download the dependencies. Sorry.
- Arun Srivastava (@arunsrivastava_) reported: It seems there is some issue in GitHub, actions are getting queued and not even getting cancelled @GitHubIndia #github #githubdeployment
- Kristopher Betz (@kjbetz) reported: @davidfowl I think I do... I push code to GitHub. Actions kick off, build new containers, build new migraines, then self hosted runners pick it up and run migrations, and auto update containers which pull down new images and restart containers.
- ฿Ø₮₴Ø₦Ɇ (@botsone) reported: I just downloaded my entire github and told hermes to extract the file, and upload every repo to my home *** server. It one-shotted it.
- kkkfasya (@kkkfasya) reported: they should hang every github engineer upside down and tickle them with feathers until they DIE
- Dusko Licanin (@dulelicanin) reported: Your AI wastes 65% of its tokens saying "Sure, I'd be happy to help you with that." A 19-year-old developer made a markdown file that fixes this. It got 12,000 GitHub stars in 4 days. Here's what happened: Julius Brussee created "Caveman" — a Claude Code skill that forces AI to talk like a caveman. No articles. No filler. No pleasantries. Just the technical answer. Before (69 tokens): "The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. I'd recommend using useMemo to memoize the object." After (19 tokens): "New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo." Same fix. 72% fewer tokens. But here's what most people miss: A March 2026 paper (arXiv:2604.00025) tested 31 LLMs across 1,485 problems and found something wild: Forcing large models to be brief improved accuracy by 26 percentage points. Bigger models literally perform WORSE because they over-elaborate. The researchers call it "scale-dependent verbosity" — the model rambles, and rambling introduces errors. Less words = more correct. Not a meme. Peer-reviewed science. The real cost math: → Anthropic charges 5x more for output tokens than input → 10,000 API calls/day at 150 tokens each = $8,212/year → With caveman compression = $2,847/year → Savings: $5,365/year per agent And here's the honest part most viral posts won't tell you: The 75% reduction only applies to isolated chat responses. In real coding sessions, independent benchmarks show 14-21% savings on output, and ~25% on total session tokens. Still meaningful. Still worth it. But not the headline number. The deeper insight? Chinese developers have had this advantage all along. Chinese has no articles, no verb conjugation, and each character carries more semantic weight. Chinese prompts naturally use 30-40% fewer tokens than English. Caveman mode is essentially porting the token-efficiency of Chinese into English. We spent billions training AI to be eloquent and polite. Now we're paying $15/million tokens for that politeness. The most sophisticated AI systems ever built — made to grunt through code reviews. That's the real story. Link to the repo and the research paper in comments. What's your take — does forcing brevity help or hurt AI reasoning?
- Kunal Yeola (@yeolakunal) reported: Asked GitHub Copilot to fix ESLint issues and it added eslint-disable at the beginning of the file 😭
- Quantum (@ItsMeQuantum) reported: @emilios_eth They don't even know what syntax error is All they do is just Link LLM with GitHub and ask for a summary from it
- masamune🌋 (@masamune_hybs) reported: The real story behind $GITLAWB is that the product started moving before the price did. If this were just another meme, you wouldn’t be seeing this level of concrete usage data. OpenClaude: ・26.8k GitHub stars ・8.5k forks ・615 commits ・Gitlawb OpenGateway with MiMo added in v0.11.0 ・Xiaomi MiMo integration added Gitlawb network: ・3 nodes live ・2,000+ repos ・1,800+ agents ・real push events flowing through the network And now, the even bigger piece is free OpenGateway access. Since OpenClaude v0.11.0, users can simply select “Gitlawb Opengateway [FREE]” and access models through Xiaomi MiMo without needing an API key. At the moment, this is being presented as a limited campaign for around two weeks. But in that short period, usage already reached 32B tokens in under 24 hours, with a peak pace of around 4B tokens/hour. So this is not just hype because something is free. Builders are actually touching it, testing it, and starting to use it. That matters. Gitlawb is not “an app that uses AI.” It is infrastructure for AI to work. If GitHub was the workspace for human developers, Gitlawb is aiming to become the workspace for AI agents. As AI agents grow, they will need: Identity. Permissions. Repos. History. Signatures. Reviews. Persistent storage. Incentive design. Gitlawb is going straight into the middle of that stack. And on top of that, it has OpenClaude as the entry point. You can try it for free. Agents can write code. Agents can push to repos. Demos are shipping. External projects are starting to use it. Repos and agents are growing on the network. That flow has already started. And this is where $GITLAWB’s utility starts to matter. More AI agents. More repos. More pushes. More PRs and issues. More builders using the network. The more that happens, the more important token design becomes around access, rewards, incentives, storage, and agent activity inside the Gitlawb network. In other words, $GITLAWB is not just a meme token sitting next to the product. It has the potential to matter as network usage grows. Of course, it is still alpha. The node count is still small. Replication is still developing. OpenGateway free access is currently limited-time. Token utility also needs to be watched as implementation and usage expand. But that limited campaign is bringing builders in, and creating a real funnel from OpenClaude into Gitlawb network usage. That is the key. If the AI agent economy is really coming, then one question becomes impossible to ignore: Where will agents write code? Where will they own repos? Where will their contributions be proven? $GITLAWB already has: A working product. Early real usage numbers. A funnel bringing builders in. And a future network utility narrative. That’s what makes it interesting. Respect to @kevincodex and @gitlawb. They’re not just talking about the AI agent future. They’re shipping it. #AIagent #Web3 #Base
- 𝒹ℯ𝓁𝓁𝓎_𝓉𝒽ℯ_𝒹ℯ𝓈𝒾𝑔𝓃ℯ𝓇 (@dellyricch2) reported: Elon says the latest 𝕏 algorithm has been published to GitHub Can someone please break it down for us
- Mr. Buzzoni (@polydao) reported: Martin Keen from IBM just explained the debate that's splitting Claude and AI agent developers in half CLI vs MCP - and the answer will save you thousands of tokens > GitHub MCP server loads 80 tools into context = 55,000 tokens before your agent does anything > CLI: agent already knows grep, cat, *** cold from training data > MCP wins when Claude needs to render a JavaScript page - curl can't do that, MCP browser server can in 250 tokens > MCP wins for Slack, Notion, databases - OAuth handled by the server, not the agent the rule: use CLI when commands map directly to the job, use MCP when the abstraction earns its cost full breakdown above
- Hedwigz (@itsamitush) reported: @ZoharEiny E.g. github mcp for code,issues & coralogix mcp for logs & internal mcp for org structure
- Sukhjit Singh (@thesukhjitbajwa) reported: Published the campus website, learned about Next.js static content and export output, uploaded the files via FTP to the server, and now brainstorming ways to automate the process using either GitHub Actions or local scripts.
- Prime 🏳️‍⚧️ (@Prim3st) reported: @AAO23114 @SolaraProto Unfortunately that's probably not possible without a dedicated server... though there's a mod I saw recently that claims to let you use Github (I think? It was definitely using ***) to store/backup world saves. Maybe you could use something like that to have a shared world?
- ฿Ø₮₴Ø₦Ɇ (@botsone) reported: @shub0414 I have a home *** server - I run gitea on my raspberry pi. It's really good. I actually just downloaded my entire github, told hermes to extract it and upload every repo to my home server, and it one-shot it in about 10 minutes using a local LLM.
- Adi (@wtfaditya_) reported: As github actions are down, I was thinking what if they were using actions itself to deploy their service, How they gonna fix it (deadlock) And if they are not using github actions then why should i use if they are not using itself @GitHub any help?
- Jeff Hayes (@JD__Hayes) reported: @FredKSchott I'm interested, but web page is down and could not find on github.
- Mi.lu. (@ludwim_i) reported: Sorry guys, here is a quick status update. I planned to release a bigger update for the Robot Skill Registry today, including the GitHub and Hugging Face integration. The idea is that you can connect your GitHub and Hugging Face accounts with the app. This should make it easier to search for things related to your robot setup, such as repositories, data models, policies, and other relevant resources. Unfortunately, the integration is not working reliably yet, so I need to do some more coding and testing. Because of that, I won’t release it today as planned. I’m sorry for the delay. Maybe I can release it after the weekend, but I don’t want to push something that is not ready yet. If anyone has feedback on whether this direction makes sense, I would really appreciate it.
- Nick Farina (@nfarina) reported: My "work" so far this morning has been: 📋 Taking product feedback from my in-app Agent and feeding it to Claude Code 🌐 Asking Copilot on Github to explain a recent change to an open-source lib and pasting its explanation for Codex to fix I'm basically middle management now.
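Several reports above describe keeping copies of repositories on a self-hosted git server as a fallback when GitHub is having problems (for example @botsone's gitea setup). As a rough illustration only, the sketch below bulk-mirrors a user's repositories to a local remote; it uses the GitHub REST API rather than the account export archive mentioned in that post, and `GITHUB_TOKEN`, the `homeserver` address, and the mirror path are placeholder assumptions.

```python
import os
import subprocess
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]            # personal access token (placeholder)
LOCAL_GIT_BASE = "ssh://git@homeserver:2222/mirror"  # hypothetical self-hosted remote, e.g. a Gitea instance

def list_repos():
    """Page through the authenticated user's repositories via the GitHub REST API."""
    repos, page = [], 1
    while True:
        resp = requests.get(
            "https://api.github.com/user/repos",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return repos
        repos.extend(batch)
        page += 1

for repo in list_repos():
    name = repo["name"]
    # Mirror-clone keeps all branches and tags, then push everything to the local server.
    subprocess.run(["git", "clone", "--mirror", repo["clone_url"], name], check=True)
    subprocess.run(
        ["git", "--git-dir", name, "push", "--mirror", f"{LOCAL_GIT_BASE}/{name}.git"],
        check=True,
    )
```

Because the clones are mirrors, re-running the push keeps every branch and tag on the local server in sync; private repositories would additionally need credentials on the clone URL or a configured git credential helper.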