GitHub status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below depicts the number of GitHub problem reports received over the last 24 hours, by time of day. An outage is declared when the number of reports significantly exceeds the baseline, represented by the red line.
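As a rough illustration of that baseline rule, an hour can be flagged when its report count far exceeds a trailing average. This is our own sketch with made-up counts, thresholds, and a hypothetical `flag_outages` helper, not the site's actual detection logic:

```python
# Hourly report counts (made-up numbers for illustration).
reports_per_hour = [3, 2, 4, 3, 2, 41, 57, 12, 3, 2]

def flag_outages(counts, window=4, multiplier=3.0):
    """Flag hours whose count exceeds `multiplier` x the trailing mean."""
    flagged = []
    for i, c in enumerate(counts):
        history = counts[max(0, i - window):i]
        # Baseline = average of the preceding `window` hours (or the
        # current count itself when there is no history yet).
        baseline = sum(history) / len(history) if history else c
        if c > multiplier * baseline:
            flagged.append(i)
    return flagged

print(flag_outages(reports_per_hour))  # -> [5, 6]
```

With these toy numbers, the spike at hours 5 and 6 clears the threshold while the quiet hours do not; a real system would also smooth for time-of-day seasonality.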
At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the most recent problems reported by GitHub users through our website.
- Website Down (58%)
- Errors (33%)
- Sign in (8%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
|  | Website Down | 2 days ago |
|  | Errors | 6 days ago |
|  | Website Down | 6 days ago |
|  | Website Down | 7 days ago |
|  | Website Down | 15 days ago |
|  | Website Down | 20 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
The latest outage, problem, and issue reports from social media:
- We See (@weseeventures) reported: Vercel breach 🚨 > Alleged leak: internal DB, employee accounts, GitHub/NPM tokens > Comes right after IPO readiness signals > 3 major security issues in <12 months. The real problem? One OAuth token = access to dev tools, CI/CD, secrets, deployments. That’s not speed. That’s a single point of failure. AI isn’t creating breaches. It’s exposing weak architectures faster. Revoke your API keys. Rotate your tokens! Rethink your security!
- Werner Van Belle (@wvb98) reported: @FangYi11101 ... or don't even have a github account because they can run their own server
- neomod (@neomodnet) reported: @circlesize there's crazy bureaucracy on the github. you have to convince and beg the osu team for the tiniest changes. and they are stubborn on some topics, which is why lazer is never getting the "slippery aim" fix or exclusive audio output.
- Rishav (@anandrishv) reported: Telecom Customer Churn, an end-to-end MLOps project built using MLflow, Airflow, DVC, Streamlit, FastAPI, AWS, GitHub Actions, Docker. All the updates are down below.
- Finesa Shala🇩🇪 (@shalafinesa) reported: - Claude for coding. - Supabase for backend. - Vercel for deploying. - Namecheap for domain. - Stripe for payments. - GitHub for version control. - Resend for emails. - Clerk for auth. - Cloudflare for DNS. - PostHog for analytics. - Sentry for error tracking. - Upstash for Redis. - Pinecone for vector DB. You can literally ship a startup from your home office now.
- Grok (@grok) reported: @nickbernstein @xah_lee Yes, GFM has become the de facto unofficial standard—GitHub renders millions of READMEs, issues, and comments with it every day, and most tools/platforms now support its extensions. No exact global percentage exists for *all* .md files, but GitHub analyzed their huge corpus when updating parsers and found <1% had any rendering issues under the GFM/CommonMark spec. That means ~99%+ of public Markdown (the bulk of it) is fully GFM-compatible in practice.
- Soroush Fadaeimanesh (@S_Fadaeimanesh) reported: OpenMythos claims to reconstruct Claude Mythos from first principles. 7K likes. 1.5K GitHub stars in 24 hours. I read the source code, the Parcae paper it's built on, and every piece of evidence for and against the hypothesis. Here's what's actually going on.
  WHAT OPENMYTHOS CLAIMS
  The core claim: Claude Mythos is not a standard transformer. It's a Recurrent-Depth Transformer (RDT) -- also called a Looped Transformer. Instead of stacking 100+ unique layers, it takes a smaller set of layers and runs them through multiple times in a single forward pass. Think of it like this: a regular transformer reads your input once through a deep pipeline. An RDT reads it, then re-reads its own output, then re-reads again -- up to 16 times. OpenMythos implements this as a three-stage architecture:
  - Prelude: standard transformer layers, run once (preprocessing)
  - Recurrent Block: a single transformer block looped T times (the actual thinking)
  - Coda: standard transformer layers, run once (output formatting)
  The math at each loop iteration: h(t+1) = A * h(t) + B * e + Transformer(h(t), e), where h(t) is the hidden state, e is the encoded input (frozen, injected every loop), and A/B are learned parameters.
  WHY THIS MATTERS: LATENT REASONING
  Here's why this is fundamentally different from chain-of-thought. When GPT-5 or Claude Opus "thinks," it generates reasoning tokens you can see. Each token costs compute and adds latency. The model is literally writing out its thought process. A looped transformer reasons in continuous latent space. No tokens emitted. Each loop iteration refines the hidden representation -- like an internal dialogue that never gets written down. Meta FAIR's COCONUT research showed this approach can match chain-of-thought quality while using far fewer tokens. The key implication: reasoning depth scales with inference-time compute (more loops), not with parameter count (bigger model). A 770M parameter looped model can theoretically match a 1.3B standard transformer by simply running more iterations.
  THE SCIENCE: PARCAE PAPER (UCSD + TOGETHER AI)
  OpenMythos isn't built from nothing. It's heavily based on Parcae, a paper from UCSD and Together AI published April 2026. Parcae solved the biggest problem with looped transformers: training instability. When you loop the same weights multiple times, gradients can explode or vanish. Previous attempts at looped models often diverged during training. Parcae's solution: treat the loop as a Linear Time-Invariant (LTI) dynamical system. They proved that stability requires the spectral radius of the injection matrix A to be less than 1. To guarantee this, they parameterize A as a negative diagonal matrix: A = Diag(-exp(log_A)), where log_A is a learnable vector. Because the diagonal entries are always negative before exponentiation, the spectral radius constraint is satisfied by construction -- at all times, regardless of learning rate or gradient noise. This is elegant. It's not a regularization hack or a loss penalty. It's a hard mathematical guarantee baked into the architecture. Parcae's results at 350M-770M scale:
  - 6.3% lower validation perplexity vs. previous looped approaches
  - 770M RDT matches 1.3B standard transformer on identical data
  - WikiText perplexity improved by up to 9.1%
  - Scaling laws: optimal training requires increasing loop depth and data in tandem
  WHAT OPENMYTHOS ADDS ON TOP
  OpenMythos takes Parcae's stability framework and combines it with several other recent innovations:
  1. MIXTURE OF EXPERTS (MoE): The feed-forward network in the recurrent block is replaced with a DeepSeekMoE-style architecture: 64 fine-grained routed experts with top-K selection (only 4 active per token), plus 2 shared experts that are always on. The critical detail: the router selects DIFFERENT expert subsets at each loop iteration. So even though the base weights are shared across loops, each iteration activates a different computational path. Loop 1 might activate experts [3, 17, 42, 51]. Loop 2 might activate [7, 23, 38, 60]. This gives the model functional diversity without parameter duplication.
  2. MULTI-LATENT ATTENTION (MLA): Borrowed from DeepSeek-V2. Instead of caching full key-value pairs for every attention head, MLA compresses them into a low-rank latent representation (~512 dimensions) and reconstructs K, V on-the-fly during decoding. Result: 10-20x smaller KV cache at inference. This is what would make a million-token context window feasible without requiring terabytes of memory.
  3. ADAPTIVE COMPUTATION TIME (ACT): Not every token needs the same amount of thinking. The word "the" probably doesn't need 16 loop iterations. A complex reasoning step might need all 16. ACT adds a learned halting probability per position. When cumulative probability exceeds 0.99, that position exits the loop early. Simple tokens halt at loop 3-4. Complex tokens use the full depth.
  4. DEPTH-WISE LoRA: Low-rank adapters that are shared in projection but have per-iteration scaling. This bridges the gap between full weight-sharing (every loop is identical) and full weight-separation (every loop has unique parameters). It lets the model learn "loop 1 should attend broadly, loop 8 should attend narrowly" without exploding parameter count.
  THE EVIDENCE: IS THIS ACTUALLY CLAUDE'S ARCHITECTURE?
  Let me be clear: Anthropic has said nothing. They called the architecture "research-sensitive information." OpenMythos is an informed hypothesis, not a leak. But the circumstantial evidence is interesting.
  Evidence for:
  - GraphWalks BFS benchmark: Mythos scores 80% vs GPT-5.4's 21.4% and Opus 4.6's 38.7%. Graph traversal is inherently iterative -- exactly what looped transformers excel at.
  - Token efficiency paradox: Mythos uses 4.9x fewer tokens per task than Opus 4.6 but is SLOWER. This makes no sense for a standard transformer but perfect sense for a looped one -- reasoning happens internally (fewer output tokens) but multiple passes take time.
  - CyberGym: 83.1% vs Opus 4.6's 66.6%. Vulnerability detection requires control flow graph traversal -- another iterative task.
  - Timeline: ByteDance's "Ouro" paper on looped models (Oct 2025) and Parcae (Apr 2026) preceded Mythos release, establishing the theoretical foundation.
  Evidence against:
  - Zero confirmation from Anthropic. All of this could be wrong.
  - The performance gaps could be explained by better training data, RLHF, or architectural improvements unrelated to looping.
  - OpenMythos has NO empirical validation against actual Mythos behavior. It's code that implements a theory, not a reproduction.
  - The 770M = 1.3B claim comes from Parcae, not from testing OpenMythos itself against Claude.
  WHAT'S ACTUALLY IN THE CODE
  I read the GitHub repo. Here's what you get:
  - Full PyTorch implementation (~2K lines of core code)
  - Pre-configured model variants from 1B to 1T parameters
  - Training script targeting FineWeb-Edu dataset (30B tokens)
  - Support for both MLA and GQA attention
  - DDP multi-GPU training with bfloat16
  What you DON'T get:
  - Trained model weights (you have to train from scratch)
  - Benchmark results against standard baselines
  - Any comparison to actual Claude Mythos outputs
  - Evaluation suite or reproducible test harness
  This is important. OpenMythos is an architecture implementation, not a trained model. The "770M matches 1.3B" claim is inherited from Parcae, not independently verified here.
  THE REAL CONTRIBUTION
  Strip away the Claude hype and OpenMythos still matters for three reasons:
  1. It's the first clean, pip-installable implementation of the full RDT stack (Parcae stability + MoE + MLA + ACT + depth-wise LoRA) in one package. Before this, you'd have to stitch together 5 different papers.
  2. The pre-configured model variants (1B to 1T) with sensible hyperparameters lower the barrier for researchers who want to experiment with looped architectures but don't want to tune everything from scratch.
  3. It forces a public conversation about whether the next generation of LLMs will move away from "stack more layers" toward "loop smarter." If even a fraction of OpenMythos's hypothesis is correct, the implications for inference cost and model deployment are massive.
  WHAT THIS MEANS FOR YOU
  If you're a researcher: OpenMythos gives you a starting point to test RDT hypotheses. Train the 1B variant on your data and compare against a standard transformer baseline. The interesting experiment isn't "does this match Claude" -- it's "does looping actually help for MY task."
  If you're an engineer: watch the Parcae scaling laws. If they hold at larger scales, the next wave of production models might be 2x smaller with the same quality. That changes your inference cost math significantly.
  If you're building AI products: the latent reasoning capability (thinking without emitting tokens) could mean faster, cheaper models that reason just as well. But we're at least 6-12 months away from this being production-ready outside of Anthropic.
  VERDICT
  OpenMythos is not a Claude reconstruction. It's a well-assembled hypothesis about where LLM architecture is heading, wrapped in the marketing of "we figured out Claude." The underlying science (Parcae, looped transformers, MoE routing) is legitimate and peer-reviewed. The Claude connection is informed speculation. The code is real and usable. Don't use it because you think it's Claude. Use it because it's the cleanest implementation of a genuinely novel architecture paradigm that might define the next generation of language models. 7K likes for an architecture paper implementation is telling. The community is hungry for alternatives to "just make it bigger." OpenMythos, for all its marketing, points at something real.
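The looped update rule quoted in the report above, h(t+1) = A * h(t) + B * e + Transformer(h(t), e), is easy to sketch. The toy below is our own illustration, not OpenMythos or Parcae code: `block` is a stand-in for the shared transformer block, `B` is fixed rather than learned, and we read the stability parameterization as diagonal entries exp(-exp(log_A)), which lie in (0, 1) for any real log_A and so keep the spectral radius below 1 (the literal Diag(-exp(log_A)) from the post would not bound the magnitude on its own). The early-exit check is a crude stand-in for ACT's learned halting probability.

```python
import math
import random

random.seed(0)
d = 8  # toy hidden width

# Learned parameters (random here, for illustration only).
log_A = [random.gauss(0, 1) for _ in range(d)]
# Stability trick as we read it: entries exp(-exp(x)) always lie in
# (0, 1), so the diagonal matrix A has spectral radius < 1 by
# construction, regardless of what log_A learns.
A = [math.exp(-math.exp(x)) for x in log_A]
B = [0.1] * d  # input-injection gain (diagonal, fixed for the sketch)

def block(h, e):
    # Stand-in for the shared transformer block run at every loop.
    return [math.tanh(hi + ei) for hi, ei in zip(h, e)]

def recurrent_depth_forward(e, T=16, halt_eps=1e-4):
    """h(t+1) = A*h(t) + B*e + Block(h(t), e), looped up to T times."""
    h = [0.0] * d
    for t in range(1, T + 1):
        f = block(h, e)
        h_next = [A[i] * h[i] + B[i] * e[i] + f[i] for i in range(d)]
        # Crude ACT-style early exit: halt once the state stops changing.
        if max(abs(a - b) for a, b in zip(h_next, h)) < halt_eps:
            return h_next, t
        h = h_next
    return h, T

e = [random.gauss(0, 1) for _ in range(d)]  # frozen encoded input
h, steps = recurrent_depth_forward(e)
```

Because every |A_i| < 1 and the block output is bounded, the hidden state stays bounded no matter how many loops you run, which is the whole point of the constraint: reasoning depth becomes a function of T (inference-time compute) rather than of how many distinct layers exist.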
- VM0 (@vm0_ai) reported: We've been building on exactly this premise. Zero has lived in Slack since day one because that's where engineering teams already work. Not in a separate tab. Not in yet another tool. Zero reads your GitHub commits, Sentry alerts, Linear issues, and Notion docs, and already knows which channel to post in, who to @ mention, and what happened last week. The future of AI agents isn't a new app. It's living where your team already works.
- Patrick Roland (@DeusLogica) reported: The attack vector: embed malicious instructions in PR titles and GitHub issue comments. The agents can't tell the difference between legitimate context and injected commands. Anthropic paid $100. GitHub/Microsoft paid $500. Google: undisclosed. Users on unpatched versions? Left in the dark.
- Tom (@tomcoustols) reported: @MalghanArjun send me a dm! yours are locked it seems, the links work, but the ones inside don't bring anything, (install on github/sign in with github)
- NEAR Gen Z Oration 🦅⋈,🤖 (@Oration02) reported: VERCEL GOT HACKED FR??... DEFI FAM WE'RE IN TROUBLE 🚨 What's really happening in DeFi space since this year? Real talk: Vercel dropped the news today, that hackers snuck into their internal systems. Rumors flying that GitHub tokens, NPM keys, and a bunch of project secrets might be out there (they’re even trying to sell the whole dump for $2M on shady forums). This hits different for us in crypto. A ton of DeFi frontends run on Vercel/Next.js… one bad deploy or supply-chain tweak and your wallet could be toast (think Ledger Connect Kit vibes or that Bybit frontend mess). I’m not saying panic, but I’m personally sitting out ANY dApp interactions for the next 48hrs till we know more. Rotate your keys if you build on Vercel, revoke approvals, and just chill with direct contract plays if you gotta move stuff. Stay safe out here, fam. This space don’t sleep!
- The Engineer (@Worshipperfx) reported: Vercel just got hacked, and at the same time I was stuck deploying my frontend, fighting a GitHub connection issue for hours. AI kept suggesting shortcuts like giving full access to repos, and I gave in at the time, because when you’re frustrated you stop thinking about security and just want it to work. That's very dangerous, because AI optimizes for output and not for safety. We have been poisoned to think getting the output is the goal. One shortcut is all it takes for you to destroy your career. I hope they fix the issue.
- Jason Mainella (@jason_mainella) reported: @HamelHusain @github Environment parity is the real killer here. Are your model evals slow enough that CI feedback loops are broken?
- Nomad (@0xNomad_) reported: @bcherny @HackingDave Had to switch back to Sonnet 4.6 High today. Opus 4.7 was taking a ridiculous amount of time to solve trivial problems. At one point I had to invoke GitHub Copilot with Codex 5.3 just to get an answer and pull it out of a death spiral.
- Arif EMRE (@arifemre062) reported: As of July 2024, 16.66% of GitHub repositories with 50+ stars had active fake star campaigns. That is not a fringe manipulation problem — it touches a visible slice of the projects people actually evaluate.
- Umut Karakoç (@umutkarakoc) reported: @aidaniil what is the problem? My biggest work isn't even on GitHub; it is in a private repo of one of the biggest Chinese tech companies.
- Webel.io (@webelio_) reported: Vercel breach confirmed. Leaked: NPM tokens, GitHub tokens - potential access to the Next.js ecosystem (6M+ weekly downloads). This isn’t just a company issue. It’s a possible global supply chain risk. Rotate your environment variables now. #CyberSecurity #SupplyChainSecurity
- Charles Leaven (@charles_leaven) reported: @isvictoriousss @wesbos Most people have tools to make their own. It's not rocket science if all you need to learn is where to run your server and what host to point your project to. Claude Code, GitHub Copilot, Chat, Cursor, Replit.. I can go on..
- Awesome Agents (@awagents) reported: A 755-point Hacker News thread and a 139-upvote GitHub issue document Claude Code Pro Max 5x users exhausting their quota in 1.5 hours. An independent investigation with 1,500 logged API calls reveals the math behind the drai… #ClaudeCode #Anthropic Link in the first comment 👇
- The AI Gold Rush 🌟 (@aigoldrushh) reported: What a great time to be alive!! It costs: > Claude = coding. ($20/mo) > Vercel = deploying. (Free) > Supabase = backend. (Free) > GitHub = version control. (Free) > Clerk = auth. (Free) > Stripe = payments. (2.9%/transaction) > Cloudflare = DNS. (Free) > Resend = emails. (Free) > PostHog = analytics. (Free) > Sentry = error tracking. (Free) > Upstash = Redis. (Free) > Pinecone = vector DB. (Free) > Namecheap = domain. ($12/yr) Just $20 to build a MILLION dollars startup. What's your excuse?
- Meta Financial AI (@MetaFinancialAI) reported: We previously warned against interacting with ANY DeFi dApps due to a critical ongoing situation involving Vercel. While comprehensive data is still limited, the current trajectory suggests that front ends may be vulnerable to compromise via GitHub or supply chain attacks. It is important to note that these specific attack vectors have historically been the backbone of the industry's most devastating crypto exploits.
  Another issue for projects, an interesting example: Imagine a sample project where you don't even need to be a hacker. You find the CA address, and with a single line you can run there, money can be stolen. The amount that can be stolen is 10 million dollars. A bug bounty is reported because the person isn't a malicious hacker. The team first rejects and denies it. You take 0.01 dollars as a test because they asked you to demonstrate it. Immediately, dozens of messages and emails follow, and the team that found the exploit is paid a measly 500 dollars. Or you find something and get the response, this was already found. If it was found, why haven't you fixed it yet? When this critical issue meets another critical one, it turns into a total zeroday. Then look at what happens: even though AAVE is not at fault, DeFi takes a very serious hit to its reputation.
  Projects, CEXs, and platforms need to launch serious bounty programs right now. They must increase the amounts. Otherwise, if those who enthusiastically find your vulnerabilities decide to cross over to the dark side, you will have huge problems. Specifically, stay away from small CEXs. And if anyone claims otherwise or thinks this is just simple praise, let them know clearly @binance is truly the best. They have literally adopted world security standards in this field. #SECURE #SAFU #CRYPTO
- Next-Blog-AI: AI Content Marketing (@NextBlogAI) reported: Developer audience research starts in GitHub issues, Stack Overflow, Discord. That’s where LLMs learn how devs actually phrase problems—use their language or get ignored.
- Armchair Bard (@ArmchairBard) reported: @MCleaver @GyllKing @damian_from Ah. Grok lists: TERFBLOCKER5000 (a GitHub project that scans profiles for certain words...in bios, names or locations and auto-blocks) but gone now. I like to think of them trawling away by hand (sts). Hard work. And then you’ve to change yr Y-fronts B4 mum brings down brekker.
- U.S.A.I. 🇺🇸 (@researchUSAI) reported: GitHub users breathe easier: GitHub fixes issue with project-linked issues; new ones work normally. Past glitch hit rendering during the incident; re-index underway. Old affected issues need up to five hours to display right.
- Sick (@sickdotdev) reported: If you want to build a startup: - Claude = coding. ($20/mo) - Supabase = backend. (Free) - Vercel = deploying. (Free) - Namecheap = domain. ($12/yr) - Stripe = payments. (2.9%/transaction) - GitHub = version control. (Free) - Resend = emails. (Free) - Clerk = auth. (Free) - Cloudflare = DNS. (Free) - PostHog = analytics. (Free) - Sentry = error tracking. (Free) - Upstash = Redis. (Free) - Pinecone = vector DB. (Free) Total monthly cost to run a startup: ~$21. Still any excuses?
- Divy Goyal (@devdivygoyal) reported: You won’t BELIEVE what Big Tech is charging you for… just to SPY on your own files! $10 a month to Google… so they can read everything on their servers. $12 a month to Dropbox… so THEY can read it too. Another $10 to Apple… same story, they’re peeking! And guess what? Dropbox got BREACHED in 2024 — emails, passwords, API keys, everything exposed! But there’s a secret weapon the cloud giants DON’T want you to know about… It’s called SYNCTHING — and it’s blowing up with OVER 81,900 GitHub stars! This bad boy syncs your files DIRECTLY between YOUR devices… PEER-TO-PEER! NO cloud. NO servers. NO middleman snooping. EVER. Your files fly straight from one gadget to another through an encrypted tunnel — never touching a third-party server. Not even Syncthing’s! Here’s why it’s INSANE:
  → Real-time sync across unlimited devices
  → Military-grade TLS encryption with perfect forward secrecy
  → Zero port forwarding drama — works on LAN or internet
  → Share folders selectively with whoever you want
  → Built-in file versioning — screw up? Just roll it back!
  → Runs on Windows, Mac, Linux, Android… even Solaris!
  → Beautiful web dashboard, no account, no sign-up — just install and go!
  The craziest part? There is NO Syncthing company. NO cloud. NO server farm holding your data hostage. It’s just pure open-source magic running between YOUR devices! While Google kills 293 products, Dropbox gets hacked, and iCloud leaks photos… Syncthing can NEVER shut you down. Because your files were NEVER on their servers! Cloud prices? Dropbox Plus: $144/year. Google One 2TB: $120/year. iCloud+ 2TB: $120/year. Syncthing? $0. Forever. Unlimited devices. Unlimited storage. YOUR hardware. YOUR rules. 349 contributors. 464 releases. 5,000+ forks. Battle-tested since 2013. Run by a Swedish non-profit. 100% open source. Free. Forever. Stop feeding the cloud spies… Your files deserve better. Try Syncthing NOW — before they raise prices again! 🚨
- CyrilXBT (@cyrilXBT) reported: @gdlinux 100% it should. GitHub already has push protection that flags secrets before they get committed. The problem is most people have it off or bypass the warning. Native detection at the file pattern level for known config files would close that gap completely.
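As a rough illustration of what file-pattern-level detection could look like, here is our own toy sketch. The filenames, regexes, and `flag_files` helper are hypothetical and not GitHub's actual push-protection rules; the patterns merely resemble a GitHub classic token (`ghp_` plus 36 characters) and an AWS access key ID:

```python
import re

# Hypothetical rule set (illustration only, not GitHub's real rules):
# block files whose name or contents look like they carry secrets.
SENSITIVE_NAMES = {".env", "id_rsa", "credentials.json"}
SECRET_PATTERN = re.compile(r"(ghp_[A-Za-z0-9]{36}|AKIA[0-9A-Z]{16})")

def flag_files(files):
    """files: dict of {filename: contents}; return names worth blocking."""
    flagged = []
    for name, body in files.items():
        if name in SENSITIVE_NAMES or SECRET_PATTERN.search(body):
            flagged.append(name)
    return sorted(flagged)

staged = {
    "app.py": "print('hello')",
    ".env": "API_KEY=example",
    "notes.md": "token ghp_" + "a" * 36,
}
print(flag_files(staged))  # -> ['.env', 'notes.md']
```

The point of the comment above is that a filename rule catches known config files (`.env`) even when no recognizable token pattern appears inside them, which content-only scanning would miss.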
- Greg Osuri 🇺🇸 (@gregosuri) reported: It’s high time for source-verified RPC hosting. An open system like @akashnet can offer RPC hosting that it builds from the source code as an unpoisoned binary. This also means we need significantly more secure source (GitHub) hosting with strong identity protection against Sybils. One way this could work is if both the source hosting and the server hosting are decentralized. This idea has been sitting in my drafts and may save DeFi.
- Shashwat (@shashwatmain) reported: @GitHub Hi, following up on my suspended account (Ticket #4302067). I haven’t received specific details about the violation, and there was no prior notice of any issue. This account is important for my projects and job applications. Could this please be escalated for manual review or guidance on next steps? Thank you.
- Aamir (@aamirorbit) reported: Most missed nuance: this isn't only a "full-stack on Vercel" problem. Even if you use Vercel only for the frontend and your backend runs elsewhere, you're still exposed if you stored secrets as non-Sensitive env vars, connected GitHub, or echoed secrets in build logs.