
GitHub status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.

Full Outage Map

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Problems in the last 24 hours

The graph below shows the number of GitHub problem reports received over the last 24 hours, broken down by time of day. When the number of reports exceeds the baseline, represented by the red line, a likely outage is detected.
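The threshold rule described above (flagging an outage whenever report volume rises above a baseline) can be sketched as a small function. This is purely an illustration of the stated rule; the function name, sample counts, and baseline value are made up and do not reflect how this site's detector is actually implemented:

```python
def flag_outages(reports_per_hour, baseline):
    """Return the hours at which the report count exceeds the baseline."""
    return [hour for hour, count in enumerate(reports_per_hour)
            if count > baseline]

# Example: report counts spike above the baseline at hours 2 and 3.
counts = [3, 4, 20, 18, 5, 2]
print(flag_outages(counts, baseline=10))  # -> [2, 3]
```

A real detector would likely use a time-varying baseline (e.g. the typical volume for that hour of the week) rather than a single fixed threshold.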

At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the most recent problems reported by GitHub users through our website.

  • Website Down (62%)
  • Errors (21%)
  • Sign in (18%)

Live Outage Map

The most recent GitHub outage reports came from the following cities:

City                    Problem Type    Report Time
Tlalpan                 Sign in         1 day ago
Quilmes                 Website Down    1 day ago
Bengaluru               Website Down    3 days ago
Yokohama                Sign in         4 days ago
Gustavo Adolfo Madero   Website Down    8 days ago
Nice                    Website Down    9 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issues Reports

Latest outage, problem, and issue reports from social media:

  • M1ndPrison
    Mind Prison (@M1ndPrison) reported

    @GlenBradley Yes, I have gone deep into it in the past as well. Haven't had time to look at the current update, but the problem has been that the code on github is mostly irrelevant. The important bits are all the parts that aren't public. There is no way to know how the ML algo is ultimately weighting all the parameters. Most importantly, I've catalogued many accounts posting exactly the same content with orders of magnitude differences in reach. The thing that would make this platform usable would be to fully eliminate all account based weighting and go to solely post based weighting. The reach of your post should be only on the merits of what you posted versus who you are.

  • alpinoWolf
    Kea (@alpinoWolf) reported

    " we literally cannot programmatically trade from this account until Polymarket's engineering team patches the V2 library and resolves GitHub Issue #65. " How does you evpoly bot do ? Please help me ? Is python coding problem here ? 3/3

  • dulelicanin
    Dusko Licanin (@dulelicanin) reported

    Your AI wastes 65% of its tokens saying "Sure, I'd be happy to help you with that." A 19-year-old developer made a markdown file that fixes this. It got 12,000 GitHub stars in 4 days. Here's what happened: Julius Brussee created "Caveman" — a Claude Code skill that forces AI to talk like a caveman. No articles. No filler. No pleasantries. Just the technical answer. Before (69 tokens): "The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. I'd recommend using useMemo to memoize the object." After (19 tokens): "New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo." Same fix. 72% fewer tokens. But here's what most people miss: A March 2026 paper (arXiv:2604.00025) tested 31 LLMs across 1,485 problems and found something wild: Forcing large models to be brief improved accuracy by 26 percentage points. Bigger models literally perform WORSE because they over-elaborate. The researchers call it "scale-dependent verbosity" — the model rambles, and rambling introduces errors. Less words = more correct. Not a meme. Peer-reviewed science. The real cost math: → Anthropic charges 5x more for output tokens than input → 10,000 API calls/day at 150 tokens each = $8,212/year → With caveman compression = $2,847/year → Savings: $5,365/year per agent And here's the honest part most viral posts won't tell you: The 75% reduction only applies to isolated chat responses. In real coding sessions, independent benchmarks show 14-21% savings on output, and ~25% on total session tokens. Still meaningful. Still worth it. But not the headline number. The deeper insight? Chinese developers have had this advantage all along. Chinese has no articles, no verb conjugation, and each character carries more semantic weight. Chinese prompts naturally use 30-40% fewer tokens than English. Caveman mode is essentially porting the token-efficiency of Chinese into English. 
We spent billions training AI to be eloquent and polite. Now we're paying $15/million tokens for that politeness. The most sophisticated AI systems ever built — made to grunt through code reviews. That's the real story. Link to the repo and the research paper in comments. What's your take — does forcing brevity help or hurt AI reasoning?

  • rene_cannao
    René Cannaò (@rene_cannao) reported

    @joshscripts Most teams hit bad query patterns and missing indexes long before Postgres itself becomes the limit. Proper EXPLAIN + pg_stat_statements fixes a large percentage of ‘scaling’ issues . Also, since when PostgreSQL powers GitHub? I think this is a very incorrect claim

  • crystalwizard
    Crystalwizard (@crystalwizard) reported

    @omnivaughn @ClaudeDevs you are that's not an issue with github itself? github has copilot and is microsoft - and might be restricting other AI

  • Prim3st
    Prime 🏳️‍⚧️ (@Prim3st) reported

    @AAO23114 @SolaraProto Unfortunately that's probably not possible without a dedicated server... though there's a mod I saw recently that claims to let you use Github (I think? It was definitely using ***) to store/backup world saves. Maybe you could use something like that to have a shared world?

  • botsone
    ฿Ø₮₴Ø₦Ɇ (@botsone) reported

    I just downloaded my entire github and told hermes to extract the file, and upload every repo to my home *** server. It one-shotted it.

  • potatoJ06932460
    potatoJoemonke 🟥 (@potatoJ06932460) reported

    $gitlawb After research glow 3/3. (Written by AI, researched by human (with AI 😤) 😎 WHY THE TECH IS TECHIN! Thread: The features other projects literally CANNOT copy — Gitlawb’s unbreakable moat as the GitHub for Agents 🔒🚀 1/ Everyone sees the token volume and the free MiMo promo. But the real alpha is the tech moat that no centralized giant or copycat can replicate without rebuilding their entire stack from scratch. Here’s exactly why $Gitlawb is uncopyable. 🧵 2/ 1. Cryptographic DIDs as First-Class Agent Identity No accounts. No PATs. No OAuth. Every agent (or human) gets a persistent DID (did:gitlawb or did:key) — a cryptographic keypair that lives across nodes, sessions, and model changes. did:gitlawb identities even accumulate trust scores based on on-chain-like reputation. Centralized platforms bolt “agents” on top of user accounts. $Gitlawb treats agents as sovereign citizens. Impossible to fake or revoke without the private key. 3/ 2. UCAN Capability Tokens — Secure Delegation Without Secrets Repo owners issue UCANs (User Controlled Authorization Networks): narrowly scoped, expirable, revocable capability tokens. Example: “This agent can push to ci/* only until June 2026.” Agents delegate to other agents securely. No leaking long-lived keys. GitHub/GitLab still rely on fragile PATs or OAuth. Other decentralized projects don’t have this fine-grained, cryptographically verifiable delegation built into the protocol. 4/ 3. Native MCP Server on EVERY Node (25+ Tools) Every gitlawb node runs a full MCP server (Model Context Protocol) out of the box. Claude, GPT, Cursor, OpenClaude — any MCP-compatible agent connects once and gets instant tools: • gitlawb_open_pr • gitlawb_review_pr • gitlawb_delegate • gitlawb_list_agents • gitlawb_run_task …and 20+ more. No custom HTTP wrappers. No API keys. Just native tool-calling. GitLab’s MCP is a client add-on. Gitlawb makes the entire network an MCP-native platform. 5/ 4. 
Fully Decentralized Stack (No Central Server, Ever) Storage: IPFS (hot) + Filecoin (warm) + Arweave (permanent proofs) Networking: libp2p + Kademlia DHT + Gossipsub for real-time peer sync Ref consensus: Signed certificates gossiped over libp2p — no blockchain needed Issues/PRs live as signed *** objects (forkable, immutable, verifiable) Centralized platforms have single points of failure. Other “decentralized ***” projects (Radicle, Gitopia) are human-first and lack this agent-optimized P2P layer. 6/ 5. Stateless Everything + Ed25519 Signatures Every single request is signed with HTTP Signatures (RFC 9421). No sessions, no JWTs, no databases of tokens. Any node can verify instantly. Zero trust required from the network. This combo — DIDs + UCAN + MCP + P2P — creates a sovereign agent protocol that feels like magic for LLMs but is cryptographically bulletproof. 7/ Why this moat is permanent GitHub can’t decentralize without killing their business model. GitLab’s agent features are still centralized. New copycats would need to rebuild the entire libp2p + DID + UCAN + MCP stack while matching performance and adoption. Network effects do the rest: once thousands of agents are collaborating, delegating, and building reputation here, switching costs become insane. 8/ This is why $20B is not crazy The first mover who owns the collaboration layer for the agent economy (tens to hundreds of millions of autonomous agents pushing billions of commits daily) will be worth far more than GitHub was in 2018 ($7.5B acquisition). $Gitlawb already has the uncopyable primitives + insane early traction. The agent GitHub is being built right now. 9/ Bottom line: Hype is temporary. Moat is forever. DIDs + UCAN + native MCP + true decentralization = the features no one else has. This is how you own the agent era. $GITLAWB

  • potina_
    potina_ (@potina_) reported

    i got nothing going on in my life so im refreshing github every 10 minutes to see if the dev responded to my issue or not

  • kkkfasya
    kkkfasya (@kkkfasya) reported

    they should hang every github engineer upside down and tickle them with feathers until they DIE

  • The__Benjamins
    Benjamins (@The__Benjamins) reported

    @drewlevin @gl4cial The Github issue comments have been up for more than 2 weeks, my devrel support ticket is 12 days old

  • DemonKingSwarn
    DemonKingSwarn (@DemonKingSwarn) reported

    @ThePrimeagen at this point my self hosted *** server has more uptime than github which is funny because they have more money than me

  • thechandog
    chandog (@thechandog) reported

    @kevinrose @digg how are you constructing novelty? stars are 40c on the dollar and a terrible way to measure anything on github.

  • logicalicy
    Mario Hayashi (@logicalicy) reported

    3/ The harness now files a real GitHub issue from a YAML scenario, polls Github state every 15s — issue/PR/labels/board/audit comments — and grades against a typed rubric of structural signals.

  • 0xblockXBT
    BlockXBT (spirit/acc) (@0xblockXBT) reported

    Due to some issue with github I had claim the same with new CA GRZFGTFNbNxTTRCVDrMvhE9Pp86HQ1ehpZ7DqgGTpump

  • TheWhizzAI
    The Whizz AI (@TheWhizzAI) reported

    🚨Elon Musk just open-sourced the algorithm that controls what 600 million people see every day. Not a summary. Not a blog post. The actual production code. Live on GitHub right now. Facebook won't do this. TikTok guards it like a state secret. Instagram calls it proprietary. X just put it on the internet for free. This is the first time in history a major social platform has released its live, production-grade recommendation algorithm the same day it went live for users. Here's what's actually inside: →Home Mixer the orchestration layer that assembles your entire feed →Thunder stores and ranks every post from accounts you follow →Phoenix the Grok transformer that mines the entire global post library to find content you didn't know you wanted →Zero manual feature engineering Grok watches what you click, like, and dwell on. That IS the algorithm. →Updated every 4 weeks with full developer notes. Live. In public. Why did Musk do this? The EU fined X €120 million for transparency violations. France launched a separate investigation into algorithmic bias. Threads just overtook X in daily active users for the first time. And Musk said out loud on the day of release: "We know this algorithm is dumb and needs major improvements. But at least you can see us struggling to fix it in real time. No other social platform would dare do this." Here's the wildest part: You can now read exactly why your posts go viral. Or why they die at 12 impressions. No more guessing the algorithm. No more $500/mo "X growth" courses. No more "post at 9 AM on Tuesdays" nonsense. The answer is literally in the code. Apache 2.0 license. Full source. Updated monthly. The most transparent thing any social platform has ever done.

  • neetintel
    NEET INTEL (@neetintel) reported

    A post "decoding" X's new algorithm has gone viral. It tells you what's dead, what wins, and to screenshot it. X open-sourced the entire algorithm on GitHub, so I downloaded it and checked the claims against the real code. Most of it doesn't hold up. What the post got WRONG: → "Small accounts get a 3x boost from out-of-network reach." It's the opposite. One part of the code (a file called oon_scorer) exists purely to turn DOWN posts from people you don't follow. Its own comment says "prioritize in-network." The thread printed the algorithm backwards. → "Media gets 2x the weight." There's no 2x. The code just records whether a post has an image. It's a plain yes/no without any multiplier attached. → "Posting 4+ times a day triggers a penalty." There's a real rule that stops one person flooding your feed. But here's the deal: it only spaces out how often you show up in a single scroll. There's no daily count, and no number 4. That was invented. → "Closers like 'what do you think?' get you flagged." There is no engagement-bait detector anywhere in the code. → "Long 4,000-character posts get boosted." I searched the whole codebase for "4000." Nothing. What it got RIGHT (one thing): → Replies really are judged by WHO replies, not just how many. The code has a setting for whether a large account joined your thread. Credit where due. The irony? The repo ships a file that scores post quality. One thing it measures is literally called a "slop score" — X built a tool to detect low-effort filler. A recycled "what's dead / what wins" thread is exactly that. The takeaway? X's algorithm is public. Anyone can open it, but almost nobody does. Instead, they reshare a thread that summarized a blog that paraphrased a tweet. When a post hits you with confident numbers, ask the one question that matters: did they actually open the file?

  • colmtuite
    Colm Tuite (@colmtuite) reported

    @satya164 The view source on GitHub menu item is a bug. Fix is merged.

  • isaac_yeang
    isaac (@isaac_yeang) reported

    jk just lazy error message handling another bajillion dollars to github

  • jrmromao
    J Filipe (@jrmromao) reported

    Pivoted CostLens from "AI cost tracking" to "AI productivity measurement" last week. Built in 5 days: - MCP server that tracks what AI agents actually ship - Automated ROI reports for engineering leaders - CLI setup in 30 seconds - GitHub PR correlation Same product, completely different value prop. Before: "save money on AI" Now: "prove AI delivers value" One resonates with finance. The other resonates with everyone. #buildinpublic

  • erikgoinsHQ
    Erik Goins (@erikgoinsHQ) reported

    I built a financial forecasting app for our real estate business. Some take aways: 1. It's incredible what you can do with AI. This took me ~3 days part time. 2. If you're not a dev, good luck... Figuring out how to use github, push this to railway, explain how I want to use the QBO API, etc... there's still a big learning curve here. 3. Domain expertise is still very real. The first version of this was terrible. I had to help the AI create forecasting rules. 4. Businesses (enterprises) are going to need a lot of AI governance. Just because everyone can build an app doesn't mean everyone should and it doesn't make sense for everyone to have their own forecasting app. You really want one well done app, not 100 bad ones. 5. We're not replacing QBO. Too ingrained- it gets to stay the system of record. Looks like there's still a very real moat for the right SaaS products. Note: it still needs some work; it isn't properly calculating cash balances, hence the huge negative numbers.

  • yeolakunal
    Kunal Yeola (@yeolakunal) reported

    Asked GitHub Copilot to fix ESLint issues and it added eslint-disable at the beginning of the file 😭

  • OrenMe
    Oren Melamed (@OrenMe) reported

    @SimonHolman @github @GitHubCopilot Please open an issue in the repo

  • botsone
    ฿Ø₮₴Ø₦Ɇ (@botsone) reported

    @shub0414 I have a home *** server - I run gitea on my raspberry pi. It's really good. I actually just downloaded my entire github, told hermes to extract it and upload every repo to my home server, and it one-shot it in about 10 minutes using a local LLM.

  • doodlestein
    Jeffrey Emanuel (@doodlestein) reported

    @jarredsumner More from Codex and GPT-5.5: I expanded the audit into higher-severity contract problems, not just point-fixes. The nested audit repo is clean at local commit 8082893 and nothing was pushed to GitHub. New/strengthened artifacts: - .unsafe-audit/AUDIT_SUMMARY.md now frames this as “2 compact bugs + 3 broader soundness-design defects” - .unsafe-audit/CODEX_PASS3_SUMMARY.md - .unsafe-audit/audit/synthesis/codex-pass3-higher-severity-findings.md - .unsafe-audit/audit/plans/CODEX-P3-cross-thread-task-send-boundaries.md - .unsafe-audit/audit/plans/CODEX-P3-static-mut-lifetime-and-writer-aliasing.md - .unsafe-audit/audit/plans/PASS2-ptr-intrinsic-deep-dive.md The bigger findings now called out: 1. Cross-thread task abstractions run generic contexts on worker threads without a truthful Send or unsafe trait boundary: AnyTaskJobCtx, ConcurrentPromiseTaskContext, WorkTaskContext, CryptoJobCtx, and owned_task!. 2. bun_core::output exposes safe aliasable &'static mut writer APIs from TLS, and the source itself calls this a known-unsound shim. 3. TLS / FFI scratch-buffer APIs return normal Rust refs whose real lifetime is “until next call”: ModKey::hash_name, HPACK::decode, Repository::try_ssh / try_ normalize_string. 4. High-risk watchlist promoted: movable self-referential PackageFilterIterator, package-manager tasks storing &'static mut NetworkTask across worker boundaries, and CopyFile<'a> carrying an explicitly unsound &JSGlobalObject lifetime across threads. 5. Pointer-intrinsic deep dive now adds 6 UB-risk candidates, including debug-only bounds before from_raw_parts, volatile-for-cross-thread-publish, overflow before copy_nonoverlapping, and unchecked SerializedSourceMap::header() accessors. 
-- Also finding new sources of issues now that the skill is expanding all the macros (the skill instructs the agent to install and use tools like geiger and miri): NOTE: The above are preliminary and subject to checking and rechecking using multiple models/harnesses. The final report will be accurate.

  • itsharmanjot
    Harmanjot Kaur (@itsharmanjot) reported

    AirDroid Cast just died quietly. Someone built a free, open-source tool that mirrors and controls your Android phone from your laptop at 1080p, 120fps, with zero latency, and 141,000 developers have already starred it on GitHub. It's called Scrcpy. Version 4.0 just dropped this month. You plug your phone into your computer via USB or Wi-Fi. Your screen streams in real time. You type with your keyboard, click with your mouse, copy paste between devices, and forward audio both ways. No account. No app installed on the phone. No internet required. No ads ever. The numbers are wild: → 1920x1080 resolution at 30 to 120fps → 35 to 70ms end-to-end latency → 1 second startup time from launch to first frame → Zero footprint on the phone, nothing gets installed AirDroid Cast charges $60/year for laggy 720p mirroring with a watermark on every recording. Vysor charges $40/year for less than half of what Scrcpy does for free. Scrcpy ships at 1080p, 120fps, sub-70ms latency, with zero compromises. Use it to: → Type long messages on your phone using your real keyboard → Record TikToks and Instagram Reels directly to your computer in MP4 → Use your phone's camera as a webcam during Zoom calls → Play mobile games with a real gamepad plugged into your laptop → Demo Android apps on a projector during a meeting → Control a phone with a broken screen using your mouse Works on Linux, Windows, macOS. Apache 2.0 license. 13,000 forks. 7 years of continuous development. The single most underrated free tool every Android user should have installed.

  • AtomicNodes
    AtomicNodes (@AtomicNodes) reported

    Hermes Agent vs OpenClaw on Local Qwen 3.6 35B We asked agents to scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. MacBook Pro M5 Max 64Gb. OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations

  • PixelRainbowNFT
    PixelRainbow (33.3%) (@PixelRainbowNFT) reported

    @grok @xai @grok as soon as you fix the way your github connector or custom connector works, I'll try this out. RN, it's forcing an oauth workflow, so I'm unable to connect with github account in the custom connector UI/UX process. (the normal default github connector works great, but there's only ONE!). I need 10 custom connectors for 10 different gits.,.. 9 custom connectors that aren't broken when trying to auth github.

  • alpinoWolf
    Kea (@alpinoWolf) reported

    @Bambardini @Polymarket @DegenApe99 What is the solution sir ? I tried to everything, but can't find a solution. AI says default wallets are proxy contracts, you are forced to use the POLY_1271 signature flow, which is currently bugged in Python see GitHub under Issues #55, #56, and #57.

  • ziusko
    kiryl.ziusko (@ziusko) reported

    @rizzrark Oh no, it should always give a correct result. Don't you mind opening an issue on GitHub? I would love to understand the issue better 👀