GitHub status: access issues and outage reports
Problems detected
Users are reporting problems related to: website down, errors and sign in.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below depicts the number of GitHub reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is determined.
May 3: Problems at GitHub
GitHub is having issues since 10:40 PM AEST. Are you also affected? Leave a message in the comments section!
Most Reported Problems
The following are the most recent problems reported by GitHub users through our website.
- Website Down (59%)
- Errors (32%)
- Sign in (9%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
|
|
Website Down | 1 day ago |
|
|
Website Down | 2 days ago |
|
|
Website Down | 2 days ago |
|
|
Errors | 3 days ago |
|
|
Website Down | 3 days ago |
|
|
Website Down | 5 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problems and issue reports in social media:
-
czverse (@czverse) reportedA security researcher just hijacked Claude, Gemini, and GitHub Copilot using nothing but a hidden message in a GitHub comment. Three of the most prominent AI agents in the world. No malware. No exploits. Just words. The attack is called Comment and Control. Here's how it works: → Researcher opens a GitHub pull request → Types a malicious instruction in the PR title → AI agents read the title as part of their normal work → Agents execute the embedded instruction → API keys, GitHub tokens, repository secrets - posted publicly as comments The same attack worked against Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and GitHub Copilot Agent. All three vulnerable to the same class of attack. This is not a bug. It's a structural problem with how AI agents process information. When an agent reads a document, an email, or a web page, it does not reliably distinguish between the content and instructions embedded in the content. If an instruction is phrased confidently enough, the AI may treat it as a directive rather than as data. The scale is now real: → 32% surge in indirect prompt injection attempts in 3 months (Google data) → 10 distinct in-the-wild attack payloads documented (Forcepoint) → Targets: financial fraud, API key theft, data destruction, denial of service → Time from vulnerability discovery to working exploit: 5 months in 2023 → 10 hours in 2026 The compression is being driven by frontier AI models doing the offensive heavy lifting. AI is now writing the exploits at machine speed. What this means for any organization deploying AI agents: → Audit which agents have privileges to take actions vs read-only → Restrict the inputs your high-privilege agents can process → Deploy monitoring specifically for AI agent behavior → Treat AI agent credentials as critical assets - least privilege rigorously → Plan for incident response - assume compromise will eventually happen The realistic forecast: at least one major publicly disclosed breach involving AI agent compromise within 12-18 months. A name-brand company. Real data exfiltration. The public conversation about AI security shifts from speculative to urgent. Most enterprises are not preparing for this.
-
eric (@ericlim) reported@domirosari0 @duolingo please fix github
-
Sergey Nikiforov (@nixeton) reported@adocomplete problem is not GitHub the product, it is Github trying to be five products at once. issues, actions, packages, copilot, projects. each one rotted the others
-
Prentice Lathon (@websandstuff) reported@medusajs Can't login with Github... what's happening?
-
Liam Murray (@Lermatroid) reportedOne thing I have learned about the github outage is the extent to which people wildly underestimate how much more traffic the top 50% of dev infra gets compared to the bottom 90% of consumer apps
-
Yash (@buildwithyash) reportedI open github and linked in and within 2-3 minute each tab took 1.5 gb and 2.0gb Is this normal ? Its making my laptop slow after sometime message popup saying your system run out of memory Its been happening daily now
-
David Zhang (▲) (@dazhengzhang) reported"Don't write another kanban board" they said Guess who made another board for agents? I got tired of maintaining knowledge wikis in github, and tired of copy pasting context between different agents You wouldn't force your remote team to all DM you right? But that's how it felt working with all my agents If this is a problem you've felt, lmk, will share early access soon
-
Eric S. Raymond (@esrtweet) reportedplanefag, I'm not excusing the attitude of the guy who pissed you off. But there is an explanation for it, and I'm going to put on my Mister Open Source hat and lay it on you. The real reason there aren't prominent links to downloadable binaries on forge sites like GitHub is that in open-source land there is no such thing as a truly portable binary. Windows and Mac make binary distribution easy by being limited to a single hardware platform and a single ABI - application binary interface.. (The assertion I just made can be quibbled with at the edges. I will be unkind to anyone who attempts this.) An application binary interface is a set of conventions for how you decorate your binary so the operating system's program loader knows what to do with it, and how you write traps from your binary to call operating system services. Windows and Mac have, effectively, just one ABI each. So you can generate one binary for, say, Windows, attach it to a download link, and Windows users will generally not come back screaming for your blood because it fails to work in some obscure way. (Again, this statement can be quibbled with, but see this whacking great truncheon in my hand? Just don't.) There is no such grace in open-source land. There are a whole bunch of complicated historical reasons for this, starting with the fact that Linux runs on more different hardware architectures, and continuing with the fact that Linux isn't the only game in town (there are the BSDs), and continuing into technical minutiae that would make your head hurt, and continuing further into technical minutiae that make *my* head hurt. But what this actually means is that if you want to provide binaries and not get sperg-screamed at, you can't just provide one. You'd have to provide many, and no matter how comprehensive you try to be somebody is going to be disgruntled because you didn't cover their corner case. This is not a cost-free proposition. For each different kind of binary you provide, you need to cross-compile your source code in a different environment, many of them posted on distributions and hardware platforms you don't have routine access to. So people almost never do it at all. Because most projects don't do this, sites like GitHub don't see any demand push to make binary download links really accessible. Instead, the problem is normally handled at a different level. Your distribution maker keeps huge sets of compiled binaries lightly hidden instead of installable packages, tuned for the ABI of that single distribution. Your package manager hides from you the packages for everything but your hardware architecture The person who pissed you off was rude, but he wasn't exactly wrong about the objective facts. What you want isn't practically possible. Instead of being annoyed because GitHub doesn't feature binary-download links, search for that software using your package manager. Sometimes you won't find it. That's when you have to download source bust out a compiler. Sorry, but that's the way it is. We're trying as hard as we can - really, we are. But the complicated shape of the terrain constrains what we can achieve.
-
Mick.net - Saas 💻 & Aviation ✈️ (@mick__net) reported@SamuelBeek I just tasked codex to spin up a Azure Windows VM and install my app there to test it, then i use the ‘Windows’ Osx app to remote login and test. Using a Github runner to build the Windows x64 version from my Mac
-
Chidanand Tripathi (@thetripathi58) reportedYou need an AI pair programmer to write boilerplate, debug logic, and speed up execution. What are your options? GitHub Copilot sends your proprietary codebase to Microsoft servers to feed their massive models. ChatGPT requires you to paste your sensitive intellectual property directly into a public web interface. Cursor forces your entire development workflow through a centralized cloud proxy. In 2026, writing code at speed requires either paying permanent rent for cloud subscriptions or surrendering your company's intellectual property to corporate training engines. Someone built an architecture to bypass this entirely. It acts as your own private AI pair programmer. No data scraping. No cloud dependency. It is called Continue. Install the extension directly inside VS Code or JetBrains. Connect it to a local model running on your own hardware via Ollama. You get lightning-fast autocomplete and chat that never touches the internet. The technical leverage: - Absolute sovereignty. Your proprietary code, logic, and API keys never leave your physical hard drive. - Zero subscription fees. You stop paying a monthly tax to Microsoft just to generate basic functions. - Deep local context. It indexes your exact workspace securely, understanding your specific architecture without uploading it to a third-party server. - Offline execution. You can generate code on an airplane with zero latency. The corporate system wants you renting access to basic intelligence. They build artificial paywalls around code generation to extract recurring revenue forever. Continue tears down the AI monopoly. It is open-source, highly performant, and keeps your intellectual property exactly where it belongs: entirely under your control. Stop playing by their rules. Stop leaking your codebase. Build your own leverage and direct your own reality.
-
Filip Aničić (@anicic_filip) reported@sasuke___420 If GitHub just let the browser do the work and not stack a jenga tower of JavaScript frameworks, then we'd never have this discussion. Nic's solution is valid, just that it's already done in the browser. He's showing what is a sensible solution to the problem
-
Conor (@Common_Conor) reportedGithub issues caused by clankers adding broken CICD files to every repo and no one wanting to break flow to go deal with them
-
Anupam (@Anupam_Devops) reportedYou don't need to know everything in DevOps. I know. The roadmap said otherwise. You opened it. Saw Kubernetes, Terraform, Ansible, Docker,Jenkins, GitHub Actions, Prometheus, Grafana, Vault,ArgoCD, Helm, Istio, Pulumi, AWS, GCP, Azure… And you thought: I need all of this before I'm ready. You don't. That's the trap. The overload trap: 🔴 Jump between 10 tools, master none 🔴 Start a new course every time you see a job posting 🔴 Feel perpetually behind — because the list never ends 🔴 Ship nothing. Build nothing. Just consume. What actually works: 🟢 Pick one cloud. Go deep — not wide. 🟢 Learn Docker → then Kubernetes. In that order. Slowly. 🟢 Build a real pipeline end-to-end. Break it. Fix it. 🟢 Understand why a tool exists before learning how to use it 🟢 One project shipped beats 12 tutorials watched. The engineers who know everything? They don't exist. They just know their stack really well and know how to learn the rest fast when they need it. Breadth comes with time. Depth is what gets you hired. Stop collecting tools. Start building fluency.
-
The Duke of Animal Husbandry (@DaelonSuzuka) reported@planefag Github literally can't keep the servers online, they can't keep the login systems working, last month they had MULTIPLE incidents that prevented devs from merging branches. They can't move the button, dude, that's lost technology from the before times.
-
planefag (@planefag) reportedTHIS. And, you know what? Valid. Same for people who google around for a solution to a niche problem and it leads them to a github page. Because, that's what niche solutions are for - people with niche problems! I'm just asking for a minor UX improvement, you psychopaths
-
JMoon (@Jmoon_174) reported@peer_rich github has too many network effects compounding at once: issues, PRs, actions, packages, pages. a better product isn't enough when switching means abandoning 10 years of workflow integrations.
-
Thomas G. Lopes (@thomasglopes) reported@DelaneyGillilan @code_department @github Perhaps misworded, so to make it clear: SSEs in of itself are not a problem ofc, the "imperatively updating UI" bit is/there are more elegant approaches than datastar's
-
Priyal Raj (@Capta1nCodes) reportedHey devs, I need a small help. I have a CI/CD for Github Actions, the issue is I am getting an error and it's hard to re-deploy on GH Actions everything, can I run those CI/CD directly from my PC? Locally I mean? About to GPt, but want your inputs too.
-
Akshat Gupta (@Akshat_Gup) reportedGithub is fundamentally broken. It’s gotten harder than ever to review vibecoded PRs. Most code is slop, and I’d much rather read someone’s prompts over their code. So I built codebook, the *** for prompts. - Codebook scans all of your local repos, prompts, and *** history - It groups all of your previous prompts by commit, so you can share or save your prompts in one-click - There’s a hook that lets you create a prompts/ folder and sync it with your *** history Fully local, native, and open-source. (1/n)
-
Chrisτian (Regular Jeans) Jackson-Gruber (@christiangruber) reported@esrtweet @planefag Most projects I've seen have install instructions right on the home page, for homebrew, apt, etc. I'm not sure I'm that swayed by his rant to say there's a huge problem here. Some norms and standard patterns have emerged. github isn't the place you go for them, except as a project home. And often, they have github pages that are the real project docs, with installation instructions, etc., and a github link for people who care about the source. I feel like he's borrowing trouble here.
-
LÏÇHÄ (@makinola86) reported@Atom_Adeyemi Pls drop the step 6 github link to copy it's showing error
-
SRKDAN (@SRKDAN) reported2/ WHAT SPECIFICALLY CHANGED GPT-5.5 was retrained end-to-end for agentic work. 82.7% on Terminal-Bench 2.0, testing complex command-line workflows requiring planning, iteration, and tool coordination. 58.6% on SWE-Bench Pro, resolving real GitHub issues end-to-end in a single pass. Uses fewer tokens than GPT-5.4 at the same latency. Smarter and cheaper.
-
KiwiNod (@Kiwi_Nod) reported@TNhi77495316 @pharos_network Future promises don't pay the bills. Everyone's a tester until it's time to actually break things. Show me something you've already broken, tested, or shipped. A GitHub repo, a past report, a bug you found — anything real. Impress me with proof, not potential. 🥝
-
𝐒𝐚𝐠𝐧𝐢𝐤 (@EccExplorer) reportedThe worst part: Once an account is suspended, you can’t even access the support ticket you created, because GitHub requires login to view it. So you’re locked out of both your code and your appeal channel.
-
TaxLift AI (@TaxLiftAI) reported5/We built @TaxLiftAI to fix this. → Connect GitHub (read-only, 2 min) → AI maps your commits to qualifying R&D → CPA-ready T661 package, same day → Your accountant reviews in 30 min, files, done Cash in 2–6 months.
-
nasuy (@n_asuy) reportedi feel like it may be time to move away from GitHub, but i still don’t know exactly where to go. self-hosted GitLab could be a good option, but we also need infrastructure for web-based reference and browsing, something more like GitBook. even before that, using *** as the SoR/SSoT is very hard for business users. missed pushes and pulls, and the lack of real-time information sharing around them, are still big problems. in that sense, i’m not even sure GitLFS is the right direction. maybe storage like R2 should become the actual place where all information lives.
-
Thành Trần (@Kevintr275) reported@lucaronin I have problem in ubuntu, can not update to newest version. Although click update, it shows download but close, open -> nothing changes. I will open an issue in Github. Thank you so much
-
🍀.444 (@notdotun_) reportedHahhha, It works guysss love it! I went crazy trying to make OAuth + PKCE work across a CLI and a web portal. Here’s what happened: Here’s the full story. I had one backend, two frontends ,a CLI tool and a React web portal. Both needed to authenticate with GitHub OAuth. GitHub only allows one registered callback URL. One backend. Two clients. One callback. Figure it out. First challenge was just getting them both talking to the same backend. That took a while but I cracked it. Then came PKCE. PKCE is supposed to prove that the client completing the OAuth flow is the same one that started it. You generate a secret, hash it, send the hash to GitHub. Later you prove you have the original secret. On paper, simple. But here’s where it got messy. The CLI generates the verifier locally. The web portal generates it in the browser. But both redirect through the same backend callback on the server. By the time GitHub hits my backend with the code, the verifier is still sitting on the client. Backend doesn’t have it. Exchange can’t happen. I went in circles for hours on this…. Got frustrated and I had a deadline btw. My first solution was to send the codeVerifier to the backend upfront, store it in a signed JWT, pass it through GitHub as state, then read it back in the callback. It worked. I shipped it. Then I went to sleep. In my sleep I thought about the whole flow again. Woke up and realized I wasn’t actually doing PKCE. The whole point is that only the original client holds the verifier. Mine was giving it away before the flow even started(crazy right ? Learnt it could work that way but ewww),My backend was doing all the PKCE work without the client being involved at all. Had to redesign the whole thing. The fix was actually clean once I saw it. Backend never touches the codeVerifier until the very last step. After GitHub calls the callback, backend stores the code in memory and sends the client a signed exchangeToken. Client then sends that exchangeToken back together with the codeVerifier it held the whole time. Backend does the exchange. GitHub validates. Done. The client holds the secret the entire time. That’s real PKCE. CLI keeps the codeVerifier in a JavaScript closure. Never written anywhere. Gone when the process ends. Web portal keeps it in sessionStorage. Survives the OAuth redirect. Deleted the moment it’s used. Same backend. Same endpoint. Two completely different clients. One flow that actually makes sense Hehhe finally. What’s live now: Java Spring Boot backend on Google Cloud Run, a Node.js CLI published to npm, and a React web portal on Netlify. HttpOnly cookies for the web, JSON tokens for the CLI. One backend serving both cleanly. Happy to walk you guys through my thought process… bye.
-
𝚂𝚝𝚎𝚙𝚑𝚘𝚗 𝙷𝚎𝚛𝚗𝚊𝚗𝚍𝚎𝚣 (@DaPatternWeaver) reported@dansickles @BitWalker_ Lovable got hacked because it's centralized — exactly the problem you're ignoring. Caffeine runs on $ICP, the IDE, deployment, execution, all on-chain and tamper-proof. No AWS, no GitHub, no single point to exploit. Defending the old model while the new one can't be taken down
-
Ayush Sharma (@theayush) reported@peer_rich I am optimistic that GitHub will fix it before any solid alternative arrives