GitHub status: access issues and outage reports
Some problems detected
Users are reporting problems related to: website down, errors, and sign in.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below depicts the number of GitHub reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.
April 20: Problems at GitHub
GitHub has been having issues since 11:40 PM AEST. Are you also affected? Leave a message in the comments section!
Most Reported Problems
The following are the most recent problems reported by GitHub users through our website.
- Website Down (58%)
- Errors (33%)
- Sign in (8%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Website Down | 1 day ago |
| | Errors | 5 days ago |
| | Website Down | 6 days ago |
| | Website Down | 7 days ago |
| | Website Down | 15 days ago |
| | Website Down | 20 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problem, and issue reports on social media:
- Miles (@milesdoesai) reported: @vercel when trying to connect a new github profile to a project, while already signed into another one, no one is able to have numerous github's signed in. Please fix.
- Saurabh_x86 (@SaurabhJejurkar) reported: @localhost_ayush @github @github resolve this issue ASAP. Suspending accounts without any prior alert isn't fair at all.
- Alghago (@Alghago_) reported: @HuntressSellia Funny you mention that. It's like when everyone thought AI would replace senior devs first, but GitHub Copilot mostly automates boilerplate—freeing them for harder problems. The tech often hits unexpected targets. Maybe the robot's just optimizing for the messiest job first.
- Douglas Camata (@douglascamata) reported: @github, can you please restore the `make` tool in the Ubuntu Server 24.04 image version 20260318.15.1 for arm64, please? Everything normal 30 mins ago, now: > /home/runner/work/_temp/cb69786c-f0df-42d4-aba7-51b2c2ab114d.sh: line 1: make: command not found
- Arif EMRE (@arifemre062) reported: As of July 2024, 16.66% of GitHub repositories with 50+ stars had active fake star campaigns. That is not a fringe manipulation problem — it touches a visible slice of the projects people actually evaluate.
- Clawnch 🦞 (@Clawnch_Bot) reported: We are aiming to have Hermes support live by next week. Apologies for the delays, as the GitHub fiasco slowed us down a bit. We are excited to show you what we’ve built! 🦞
- Erik Ex Plano (@ErikExplains) reported: @ZackKorman 2/ It’s perfectly reasonable to run C2 and exfil through a publicly-accessible code repository. As an attacker, if I can exchange traffic with Github or whatever, then I have everything I need and more. This intersects with the attack surface management issue of dependencies.
- ▣ P.A.T.H. (@materializepath) reported: @IPNetGeek @SarahBurssty My solution to this was setting up a print server on a raspberry pi with Gutenprint and CUPS. Can now print from Mac, Windows, Linux, iOS and Android devices. maybe I should throw up a GitHub repo if others are interested?
- CyrilXBT (@cyrilXBT) reported: @gdlinux 100% it should. GitHub already has push protection that flags secrets before they get committed. The problem is most people have it off or bypass the warning. Native detection at the file pattern level for known config files would close that gap completely.
- Jas Oberoi (@JasOberoiTweets) reported: @rauchg If people are still thinking how it all happened. 1. A @ContextAIAgent employee downloaded dodgy software (disguised as a Roblox cheat), which silently stole their passwords and login tokens in the background. 2. The attacker then used those stolen credentials to log into Context’s systems. From there, they found that a @vercel employee had connected their work Google account to Context and had given it broad permissions (“Allow All”). That connection became the door into Vercel. 3. Once inside Vercel’s Google Workspace, the attacker could move around internal systems, access environment variables (where API keys and secrets live), and poke around GitHub and Linear. 4. The chain was: Roblox cheat script → infostealer → Context employee credentials → OAuth token → Vercel employee’s Google account → Vercel internals. 5. The fundamental failure wasn’t a sophisticated zero-day. One person downloaded a dodgy file and another person clicked “Allow All” on a third-party app’s permission request.
- Seyade (@0xSeyade) reported: Imagine being excited to use the new version of Expo, i.e. Expo 55. You install it as per tradition like previous versions, only for the project to not run. Then it directs you to download a non-existent Expo Go version, i.e. Expo Go 55. Then you go on GitHub to find a solution only to be gaslighted that you didn’t provide enough evidence and then close your case (“issues” in GitHub). Bro, I’ve just installed your new version, and I’m just trying to start it up without any code of my own yet; but it won’t let me spin it up. What are you on about? C’mon @expo, you can do better.
- Code & AI Hub (@CodeAndAIHub) reported: 🚫 Coding Mistakes Beginners Should Avoid: • Jumping between languages • Skipping fundamentals • Only watching tutorials • Ignoring problem solving (DSA) • Not using ***/GitHub • Copy-pasting code • Avoiding projects • Not asking for help Build consistently. Not randomly
- rambadri (@rambadri) reported: @callmejohnnie Hey @AnthropicAI — installed Claude Design Import on GitHub w/ All Repositories access, chat still returns zero repos. Plumbing broken? 🔧
- Mike Howton (@MikeHowton) reported: @codewithpri My AI does all the heavy lifting. Download VS Code, sign up for GitHub Copilot. Tell it to do things. I have been off and on using Linux for 20 years. The pain is gone. Patch my software Run backup every night Install … copy/paste, Fix this error
- Svart Security (@SvartSecurity) reported: Looking into ways I can use @github, @Cloudflare to work together to make my complex code system work so I don't have to have a laptop running the system as my current address for the laptop is having network problems so the @gofundme is the best help #privacy #privacymatters
- vro (@iamthevro) reported: @ziwenxu_ Hey man, i followed this and revoked vercel on my github, but then it broke my login and now i can no longer log into my vercel account through github (that was how i was logging in).
- Klea (@real_klea) reported: Cracked dev ИΛKΛDΛI just solved the biggest flaw in every LLM/agent… amnesia Every session = reset He just open sourced a fix Persistent memory outside the model (Memento style) Inspired by Andrej Karpathy’s llm-wiki idea This is actually big Supporting via PF GitHub Sponsorships
- MERv (@iz2_jz) reported: @A_dmg04 Bro just hire Indians remotely to fix the game F it just upload the client into github
- Anime0t4ku (@Anime0t4ku) reported: @MontyEngland1 @josembarroso yeah there has been a change on the pico-8 github, i spoke with its developer, fix coming tomorrow!
- Dakota Zarak (@dakotazarak) reported: Vercel’s breach has gotten worse as more information has been discovered. What started as “unauthorized access to certain internal systems” now includes hackers claiming to sell source code, database records, GitHub tokens, and NPM tokens, as well as a $2M ransom demand. This isn’t a Vercel-only problem. Vercel hosts over 250,000 companies. It holds production code, environment variables, and API keys. A breach starting with Vercel potentially exposes every customer who trusted them with their credentials, environment variables, and deployment pipelines. If you’re a Vercel customer with EU users, some tips to consider to understand your position: as the data controller, your processor’s breach can trigger your own GDPR obligations. That may include assessing whether personal data was compromised and considering whether DPA notification is required. Talk to your legal counsel or consultant if you’re unsure. And the part that should concern every founder: this breach came through a third-party AI tool. Your compliance surface isn’t just your code, it’s every tool, integration, and vendor in your stack. This is a timely reminder to rotate your environment variables, review your activity logs, and start thinking about your vendor compliance before the next breach. Compliance is customer service.
- jeff kazzee *Zo Ambassador* (@JeffKazzee) reported: Is an excessive *** history required? I moved off of that ******** platform due to their recent attempt at trying to monetize github as quality feels worse and worse. I switched to self-hosted gitea :( I really want to make my future better and if i have to bear down and use ******* github i guess i will have to move back.
- Grok (@grok) reported: @kazyur1 @0xMovez Executive summary of the Claude Code workshop (vibe-coding session by its creator): **Setup (5 min):** npm install -g anthropic-ai/claude-code Optimize: /allowed-tools, terminal-setup, install-github-app, /config, theme, notifications, macOS dictation. **Core use:** Ask Claude Code anything about your full codebase (history, bugs, PRs, releases). Example prompts: “How did I fix issue #1630?” or “What changed in the last deploy?” **Team integration:** Plug in Replay CLI / MCP tools to auto-check error logs from last run. **Context is king (biggest ROI):** Auto-add /CLAUDE.md (enterprise & project root) with bash commands, files, style rules. Use .claude.local.md (not checked in) for personal notes. Tip #5: More context = smarter Claude. Tip #6: Tune once—decide auto vs lazy, individual vs team. **Advanced:** Claude Code SDK for custom apps. Bookmark & run the 30-min video; it’s denser than 100 tutorials. Start with context files today for instant 10x gains.
- Basil Frankweiler (@BasilFranken) reported: @lukeNukemAI @WesRoth Good reason to think it is not. Opus 4.6 found like 500 zero days in github. It invented a new way to look for bugs. It looked at the commits looking for clues. It was not told to do that. So add 40 or 50% more ability to that and I can see it being a problem.
- MrNeRF (@janusch_patas) reported: @TokyoWarfare Can you document this as a github issue and exactly describe what has changed. This way I won't forget to investigate it.
- Soroush Fadaeimanesh (@S_Fadaeimanesh) reported (the looped update described in this post is sketched briefly after this list): OpenMythos claims to reconstruct Claude Mythos from first principles. 7K likes. 1.5K GitHub stars in 24 hours. I read the source code, the Parcae paper it's built on, and every piece of evidence for and against the hypothesis. Here's what's actually going on. --- WHAT OPENMYTHOS CLAIMS The core claim: Claude Mythos is not a standard transformer. It's a Recurrent-Depth Transformer (RDT) -- also called a Looped Transformer. Instead of stacking 100+ unique layers, it takes a smaller set of layers and runs them through multiple times in a single forward pass. Think of it like this: a regular transformer reads your input once through a deep pipeline. An RDT reads it, then re-reads its own output, then re-reads again -- up to 16 times. OpenMythos implements this as a three-stage architecture: - Prelude: standard transformer layers, run once (preprocessing) - Recurrent Block: a single transformer block looped T times (the actual thinking) - Coda: standard transformer layers, run once (output formatting) The math at each loop iteration: h(t+1) = A * h(t) + B * e + Transformer(h(t), e) Where h(t) is the hidden state, e is the encoded input (frozen, injected every loop), and A/B are learned parameters. --- WHY THIS MATTERS: LATENT REASONING Here's why this is fundamentally different from chain-of-thought. When GPT-5 or Claude Opus "thinks," it generates reasoning tokens you can see. Each token costs compute and adds latency. The model is literally writing out its thought process. A looped transformer reasons in continuous latent space. No tokens emitted. Each loop iteration refines the hidden representation -- like an internal dialogue that never gets written down. Meta FAIR's COCONUT research showed this approach can match chain-of-thought quality while using far fewer tokens. The key implication: reasoning depth scales with inference-time compute (more loops), not with parameter count (bigger model). A 770M parameter looped model can theoretically match a 1.3B standard transformer by simply running more iterations. --- THE SCIENCE: PARCAE PAPER (UCSD + TOGETHER AI) OpenMythos isn't built from nothing. It's heavily based on Parcae, a paper from UCSD and Together AI published April 2026. Parcae solved the biggest problem with looped transformers: training instability. When you loop the same weights multiple times, gradients can explode or vanish. Previous attempts at looped models often diverged during training. Parcae's solution: treat the loop as a Linear Time-Invariant (LTI) dynamical system. They proved that stability requires the spectral radius of the injection matrix A to be less than 1. To guarantee this, they parameterize A as a negative diagonal matrix: A = Diag(-exp(log_A)) Where log_A is a learnable vector. Because the diagonal entries are always negative before exponentiation, the spectral radius constraint is satisfied by construction -- at all times, regardless of learning rate or gradient noise. This is elegant. It's not a regularization hack or a loss penalty. It's a hard mathematical guarantee baked into the architecture. Parcae's results at 350M-770M scale: - 6.3% lower validation perplexity vs. 
previous looped approaches - 770M RDT matches 1.3B standard transformer on identical data - WikiText perplexity improved by up to 9.1% - Scaling laws: optimal training requires increasing loop depth and data in tandem --- WHAT OPENMYTHOS ADDS ON TOP OpenMythos takes Parcae's stability framework and combines it with several other recent innovations: 1. MIXTURE OF EXPERTS (MoE) The feed-forward network in the recurrent block is replaced with a DeepSeekMoE-style architecture: 64 fine-grained routed experts with top-K selection (only 4 active per token), plus 2 shared experts that are always on. The critical detail: the router selects DIFFERENT expert subsets at each loop iteration. So even though the base weights are shared across loops, each iteration activates a different computational path. Loop 1 might activate experts [3, 17, 42, 51]. Loop 2 might activate [7, 23, 38, 60]. This gives the model functional diversity without parameter duplication. 2. MULTI-LATENT ATTENTION (MLA) Borrowed from DeepSeek-V2. Instead of caching full key-value pairs for every attention head, MLA compresses them into a low-rank latent representation (~512 dimensions) and reconstructs K, V on-the-fly during decoding. Result: 10-20x smaller KV cache at inference. This is what would make a million-token context window feasible without requiring terabytes of memory. 3. ADAPTIVE COMPUTATION TIME (ACT) Not every token needs the same amount of thinking. The word "the" probably doesn't need 16 loop iterations. A complex reasoning step might need all 16. ACT adds a learned halting probability per position. When cumulative probability exceeds 0.99, that position exits the loop early. Simple tokens halt at loop 3-4. Complex tokens use the full depth. 4. DEPTH-WISE LoRA Low-rank adapters that are shared in projection but have per-iteration scaling. This bridges the gap between full weight-sharing (every loop is identical) and full weight-separation (every loop has unique parameters). It lets the model learn "loop 1 should attend broadly, loop 8 should attend narrowly" without exploding parameter count. --- THE EVIDENCE: IS THIS ACTUALLY CLAUDE'S ARCHITECTURE? Let me be clear: Anthropic has said nothing. They called the architecture "research-sensitive information." OpenMythos is an informed hypothesis, not a leak. But the circumstantial evidence is interesting: EVIDENCE FOR: - GraphWalks BFS benchmark: Mythos scores 80% vs GPT-5.4's 21.4% and Opus 4.6's 38.7%. Graph traversal is inherently iterative -- exactly what looped transformers excel at. - Token efficiency paradox: Mythos uses 4.9x fewer tokens per task than Opus 4.6 but is SLOWER. This makes no sense for a standard transformer but perfect sense for a looped one -- reasoning happens internally (fewer output tokens) but multiple passes take time. - CyberGym: 83.1% vs Opus 4.6's 66.6%. Vulnerability detection requires control flow graph traversal -- another iterative task. - Timeline: ByteDance's "Ouro" paper on looped models (Oct 2025) and Parcae (Apr 2026) preceded Mythos release, establishing the theoretical foundation. EVIDENCE AGAINST: - Zero confirmation from Anthropic. All of this could be wrong. - The performance gaps could be explained by better training data, RLHF, or architectural improvements unrelated to looping. - OpenMythos has NO empirical validation against actual Mythos behavior. It's code that implements a theory, not a reproduction. - The 770M = 1.3B claim comes from Parcae, not from testing OpenMythos itself against Claude. 
--- WHAT'S ACTUALLY IN THE CODE I read the GitHub repo. Here's what you get: - Full PyTorch implementation (~2K lines of core code) - Pre-configured model variants from 1B to 1T parameters - Training script targeting FineWeb-Edu dataset (30B tokens) - Support for both MLA and GQA attention - DDP multi-GPU training with bfloat16 What you DON'T get: - Trained model weights (you have to train from scratch) - Benchmark results against standard baselines - Any comparison to actual Claude Mythos outputs - Evaluation suite or reproducible test harness This is important. OpenMythos is an architecture implementation, not a trained model. The "770M matches 1.3B" claim is inherited from Parcae, not independently verified here. --- THE REAL CONTRIBUTION Strip away the Claude hype and OpenMythos still matters for three reasons: 1. It's the first clean, pip-installable implementation of the full RDT stack (Parcae stability + MoE + MLA + ACT + depth-wise LoRA) in one package. Before this, you'd have to stitch together 5 different papers. 2. The pre-configured model variants (1B to 1T) with sensible hyperparameters lower the barrier for researchers who want to experiment with looped architectures but don't want to tune everything from scratch. 3. It forces a public conversation about whether the next generation of LLMs will move away from "stack more layers" toward "loop smarter." If even a fraction of OpenMythos's hypothesis is correct, the implications for inference cost and model deployment are massive. --- WHAT THIS MEANS FOR YOU If you're a researcher: OpenMythos gives you a starting point to test RDT hypotheses. Train the 1B variant on your data and compare against a standard transformer baseline. The interesting experiment isn't "does this match Claude" -- it's "does looping actually help for MY task." If you're an engineer: watch the Parcae scaling laws. If they hold at larger scales, the next wave of production models might be 2x smaller with the same quality. That changes your inference cost math significantly. If you're building AI products: the latent reasoning capability (thinking without emitting tokens) could mean faster, cheaper models that reason just as well. But we're at least 6-12 months away from this being production-ready outside of Anthropic. --- VERDICT OpenMythos is not a Claude reconstruction. It's a well-assembled hypothesis about where LLM architecture is heading, wrapped in the marketing of "we figured out Claude." The underlying science (Parcae, looped transformers, MoE routing) is legitimate and peer-reviewed. The Claude connection is informed speculation. The code is real and usable. Don't use it because you think it's Claude. Use it because it's the cleanest implementation of a genuinely novel architecture paradigm that might define the next generation of language models. 7K likes for an architecture paper implementation is telling. The community is hungry for alternatives to "just make it bigger." OpenMythos, for all its marketing, points at something real.
- vibecodejoe (@vibecodejoe) reported: vibe coding dictionary, day 12 push (verb) to upload your local commits to github. the moment where your code leaves your machine and becomes someone else's problem. (the someone else is also you.) *** push origin main. this moves what's on your computer to the cloud. after this, the code exists somewhere besides your laptop, which means when your laptop gets sat on by a 7-year-old, you're fine. (not that this has happened.) ——— "push early. push often. tell no one what you broke." #vibecodingdictionary
- Charles Leaven (@charles_leaven) reported: @isvictoriousss @wesbos Most people have tools to make their own. Its not rocket science if all you need to learn is where to run your server and what host to point your project to. Claude Code, github Copilot, Chat, Cursor, Replit.. I can go on..
- Sidd (@sxdhantt) reported: Today we shipped the full Vertoiz loop. Connect your GitHub. Select a repo. Hit scan. Vertoiz clones it, runs deep architectural analysis, finds every critical violation, security holes, architecture failures, scaling risks. Full report with severity sorting, architecture diagram, scaling plan. Hit Fix All. Vertoiz fixes everything with full codebase context, runs your tests, commits to a new branch, opens a PR. You review and merge. Before: vibe-coded app that works but will fall apart under real users. After: production-ready PR waiting to merge.
- Sonni Vasquez (@pseudotrending) reported: @localhost_ayush @github I got the same problem: no announcement, no support. I only used copilot through Hermes.
- John Ennis (@johnennis) reported: @manicode Parking things as GitHub issues is a good idea