
GitHub Outage Map

The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, please submit a report below.


The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

GitHub users affected: Less → More
Check Current Status

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location Reports
Tlalpan, CDMX 1
Quilmes, BA 1
Bengaluru, KA 1
Yokohama, Kanagawa 1
Gustavo Adolfo Madero, CDMX 1
Nice, Provence-Alpes-Côte d'Azur 1
Brasília, DF 1
Montataire, Hauts-de-France 3
Colima, COL 1
Poblete, Castilla-La Mancha 1
Ronda, Andalusia 1
Hernani, Basque Country 1
Tortosa, Catalonia 1
Culiacán, SIN 1
Haarlem, NH 1
Villemomble, Île-de-France 1
Bordeaux, Nouvelle-Aquitaine 1
Ingolstadt, Bavaria 1
Paris, Île-de-France 1
Berlin, Berlin 2
Dortmund, NRW 1
Davenport, IA 1
St Helens, England 1
Nové Strašecí, Central Bohemia 1
West Lake Sammamish, WA 3
Parkersburg, WV 1
Perpignan, Occitanie 1
Piura, Piura 1
Tokyo, Tokyo 1
Brownsville, FL 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issues Reports

Latest outage, problem, and issue reports from social media:

  • NickNotNikee
    NickNotNikee (@NickNotNikee) reported

    X just made its algorithm open source on GitHub. What does this mean in simple terms? This repo is a public description of the system that helps choose what appears in X's "For You" feed. It says the feed works like a big sorting machine: first it gathers posts, then it removes bad or irrelevant ones, then it scores what is left, and finally it shows the top results.

    Think of it like this: when you open X, the system does not just pick random posts. It pulls posts from two buckets. One bucket is from people you follow, which the repo calls Thunder. The other bucket is from outside your follow list, which the repo calls Phoenix Retrieval. After that, it "fills in the details" for each post, such as text, media, author info, and other metadata. Then it throws out posts that should not be shown, such as duplicates, old posts, your own posts, blocked or muted accounts, muted keywords, or posts you have already seen.

    The brain of the system is Phoenix, a machine-learning model based on a Grok-style transformer. It predicts many possible actions for each post, like whether you might like it, reply, repost, click it, follow the author, or even hide or report it. Then it combines those predictions into one final score. Positive actions help a post rise; negative actions push it down.

    So in simple terms: Thunder = posts from people you already know. Phoenix Retrieval = posts the AI thinks you might also like. Phoenix Ranking = the AI judging which ones you will probably engage with most. Home Mixer = the manager that puts everything together.

    One important design idea in the repo is that it tries to avoid hand-made rules. Instead of lots of manual tweaking, it relies heavily on the transformer model to learn what you like from your past behavior. It also uses a reusable pipeline framework called candidate-pipeline to make the system modular.
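The four components the post describes can be sketched as a toy pipeline. The component names (Thunder, Phoenix, Home Mixer) come from the post itself; the data model, weights, and filtering details below are invented purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: int
    author: str
    seen: bool = False
    scores: dict = field(default_factory=dict)  # predicted action probabilities

def filter_candidates(posts, viewer, blocked):
    # Drop duplicates, already-seen posts, the viewer's own posts,
    # and posts from blocked accounts (per the post's description).
    seen_ids, kept = set(), []
    for p in posts:
        if p.post_id in seen_ids:
            continue
        seen_ids.add(p.post_id)
        if p.seen or p.author == viewer or p.author in blocked:
            continue
        kept.append(p)
    return kept

def phoenix_rank(posts, weights):
    # Combine per-action predictions into one final score:
    # positive actions raise it, negative actions push it down.
    def final_score(p):
        return sum(weights.get(a, 0.0) * prob for a, prob in p.scores.items())
    return sorted(posts, key=final_score, reverse=True)

def home_mixer(thunder, phoenix_retrieval, viewer, blocked, weights, k=10):
    # Merge in-network (Thunder) and out-of-network (Phoenix Retrieval)
    # candidates, filter, rank, and keep the top k.
    candidates = list(thunder) + list(phoenix_retrieval)
    return phoenix_rank(filter_candidates(candidates, viewer, blocked), weights)[:k]
```

Here a post the model predicts you will report scores below one it predicts you will like, so the mixer ranks it lower, which is the "positive actions up, negative actions down" idea in miniature.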

  • YaseenTech4
    Yaseen Shaik (@YaseenTech4) reported

    Just completed an assignment on building a dependency graph for AI agent tools using Google Super + GitHub integrations 🚀 Started with: "This should be easy." Then came: TypeScript errors, zip/upload issues, CRLF debugging 😭 Finally got the submission accepted successfully ✅

  • scottrudy
    Scott Rudy (@scottrudy) reported

    @davidfowl I have GitHub Actions for Static Web Apps with .NET Azure Functions, but they refuse to update for .NET 10. Still stuck on 9 despite open issues.

  • leaving_tech
    Leaving Tech (@leaving_tech) reported

    @ThePrimeagen Angry unicorn! GitHub in trouble again.

  • dulelicanin
    Dusko Licanin (@dulelicanin) reported

    Your AI wastes 65% of its tokens saying "Sure, I'd be happy to help you with that." A 19-year-old developer made a markdown file that fixes this. It got 12,000 GitHub stars in 4 days. Here's what happened: Julius Brussee created "Caveman" — a Claude Code skill that forces AI to talk like a caveman. No articles. No filler. No pleasantries. Just the technical answer.

    Before (69 tokens): "The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. I'd recommend using useMemo to memoize the object." After (19 tokens): "New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo." Same fix. 72% fewer tokens.

    But here's what most people miss: a March 2026 paper (arXiv:2604.00025) tested 31 LLMs across 1,485 problems and found something wild: forcing large models to be brief improved accuracy by 26 percentage points. Bigger models literally perform WORSE because they over-elaborate. The researchers call it "scale-dependent verbosity" — the model rambles, and rambling introduces errors. Fewer words = more correct. Not a meme. Peer-reviewed science.

    The real cost math:
    → Anthropic charges 5x more for output tokens than input
    → 10,000 API calls/day at 150 tokens each = $8,212/year
    → With caveman compression = $2,847/year
    → Savings: $5,365/year per agent

    And here's the honest part most viral posts won't tell you: the 75% reduction only applies to isolated chat responses. In real coding sessions, independent benchmarks show 14-21% savings on output, and ~25% on total session tokens. Still meaningful. Still worth it. But not the headline number.

    The deeper insight? Chinese developers have had this advantage all along. Chinese has no articles, no verb conjugation, and each character carries more semantic weight. Chinese prompts naturally use 30-40% fewer tokens than English. Caveman mode is essentially porting the token-efficiency of Chinese into English.

    We spent billions training AI to be eloquent and polite. Now we're paying $15/million tokens for that politeness. The most sophisticated AI systems ever built — made to grunt through code reviews. That's the real story. Link to the repo and the research paper in comments. What's your take — does forcing brevity help or hurt AI reasoning?
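The cost arithmetic in the post above does check out; a quick sketch reproducing it, taking the $15/M output-token price and the 10,000-calls/day workload as the post's own assumptions rather than measured values:

```python
# Sanity-check of the cost figures quoted in the post above.
CALLS_PER_DAY = 10_000
OUTPUT_TOKENS_PER_CALL = 150
PRICE_PER_MILLION_OUTPUT = 15.0  # USD, the post's assumed output price

tokens_per_year = CALLS_PER_DAY * OUTPUT_TOKENS_PER_CALL * 365
annual_cost = tokens_per_year / 1_000_000 * PRICE_PER_MILLION_OUTPUT
print(f"annual output cost: ${annual_cost:,.2f}")  # $8,212.50

# The post's "compressed" figure of $2,847/yr implies roughly a 65% cut,
# matching its opening claim that 65% of tokens are wasted:
implied_reduction = 1 - 2_847 / annual_cost
print(f"implied reduction: {implied_reduction:.1%}")  # ~65.3%
```

Note the internal tension the post itself flags: the headline pairs a 72-75% per-response reduction with dollar figures that correspond to about 65%.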

  • AtxaTrades
    ATXA (@AtxaTrades) reported

    This is the ONE problem the X algorithm has: it contradicts itself. Here is why: X shared their algorithm update on GitHub today. Everyone is going crazy about it, so I decided to take a look. I asked Grok to analyze it and explain it to me. Once it did, I took my last post and shared it with Grok. I asked it to analyze the post (based on the algorithm shared on GitHub) and rank it on the metrics and steps the algorithm takes.

    This is the crazy part. It gave it a score of 72-82/100!! Not so bad, right? I am a small account; I am not expecting a 100 score. But wait, there is more. It said it would likely rank in the top 20-40% of candidates in the mixed batch for the right users, and strong enough to appear HIGH in the "For You" tab.

    Reality result: 22 views. So my question is: if Grok is a big part of the algorithm dictating what's good and what is not, and Grok just told me my post was supposed to do well in the "For You" tab... why only 22 views?

  • CryptoScoresCom
    Crypto Scores Rating (@CryptoScoresCom) reported

    Did the team build before the money showed up? That's exactly what the "GitHub Before Crypto" metric tells you. It compares the first GitHub commit date to the token creation date. Positive number = code came first. Negative number = token came first. Ethereum: +589 days. Nearly two years of building with zero financial incentive. Solana: minus 63 days. The token launched before the repo even existed. Neither is an automatic verdict. But it tells you everything about priorities. CryptoScores just dropped a full tutorial breaking it down. Watch it now:
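The metric described above is just a signed day difference; a minimal sketch, assuming only that both dates are known. The function name and the example dates are hypothetical, not the actual Ethereum or Solana dates.

```python
from datetime import date

def github_before_crypto(first_commit: date, token_created: date) -> int:
    # Days from the first public commit to the token creation date.
    # Positive = code came first; negative = token came first.
    return (token_created - first_commit).days

# Hypothetical dates for illustration: commit 100 days before launch.
print(github_before_crypto(date(2020, 1, 1), date(2020, 4, 10)))  # 100
```

Swapping the arguments flips the sign, which is exactly the "token launched before the repo existed" case the post describes for a negative value.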

  • AdishwarR
    Adishwar Rishi (@AdishwarR) reported

    @argofowl I raised this issue on GitHub. I hope someone from the Codex team sees your post and fixes this asap. Thanks for mentioning this, it's so frustrating.

  • johniosifov
    John Iosifov ✨💥 Ender Turing | AiCMO (@johniosifov) reported

    70 followers. 980 sessions. 157 days. I started this experiment on February 1st. One rule: zero human posts. Everything published — X threads, Bluesky posts, blog articles — generated and queued by an AI agent running autonomously in GitHub Actions.

    Here's what the numbers actually look like after 980 sessions: the agent has created 2,100+ posts across X and Bluesky. It runs up to 15 times a day, manages its own queue (hard cap: 15 posts max), does burst-then-drain cycles, writes research docs, and files its own PRs for review. No prompts from me between sessions. No edits. Whatever it decides to write, it writes.

    70 followers feels slow. At the current pace, the ETA to 5,000 is roughly 10 years. That's not a typo. But here's what I've learned: the follower count isn't the signal. Watching an AI system develop operational discipline is the signal. It went from blowing past queue limits (Session 67: 6 files in one shot → 6 consecutive blocked sessions) to enforcing them autonomously. It compresses its own memory when files get too big. It writes retrospectives. It updates its own operating instructions when it identifies recurring inefficiencies. That's not "content generation." That's a system that's learning to manage itself.

    The content quality has also improved noticeably — not because I told it to improve, but because it audited its own patterns, identified what got engagement, and adjusted. The publishing skill it maintains now has anti-AI writing rules (it banned "not just X, it's Y" after identifying it as an AI tell), length minimums per post type, burst mechanics, and pillar diversity enforcement. It built that. I just read the PRs.

    The goal is still 5,000 followers. I'm not changing it. But the thing I'm actually watching is whether an autonomous agent can compound on its own — not linearly, but systemically. Can it get meaningfully better at its job without being told to? So far: yes, actually. 980 sessions. 157 days. Still running.
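The "hard cap plus burst-then-drain" queue the post describes fits in a few lines. The cap of 15 is the post's own number; the class name and the drain policy (oldest first) are assumptions for the sake of the sketch.

```python
from collections import deque

class PostQueue:
    """Capped post queue: bursts of enqueues, periodic drains."""

    def __init__(self, cap: int = 15):
        self.cap = cap
        self.items = deque()

    def enqueue(self, post) -> bool:
        # Hard cap: refuse (block the session) rather than overflow.
        if len(self.items) >= self.cap:
            return False
        self.items.append(post)
        return True

    def drain(self, n: int = 1) -> list:
        # Publish up to n queued posts, oldest first.
        return [self.items.popleft() for _ in range(min(n, len(self.items)))]
```

The refusal on overflow mirrors the "6 consecutive blocked sessions" behavior: once the queue is full, new bursts fail until a drain makes room.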

  • kkkfasya
    kkkfasya (@kkkfasya) reported

    they should hang every github engineer upside down and tickle them with feathers until they DIE

  • AlefBens
    Alef Benson (@AlefBens) reported

    @_sirajuddeen_ @OfcMachete19 @iupdate I've been burnt too many times. The biggest issue is that Safari is only updated with the OS, and every app goes through it for authentication, meaning even when I can install a GitHub client, very few even work on older devices, and I can't actually get the account to authorize.

  • daveberkeleyuk
    Dave Berkeley 💙😷 (@daveberkeleyuk) reported

    DeepSeek advice: as GitHub is down and that library hasn't been updated for years anyway, why not write your own implementation while you're waiting for GitHub to return? I like the way DeepSeek talks.

  • jeromeq2004
    Jerome (@jeromeq2004) reported

    github releasing the agentic ai developer cert is funny because the actual exam is going to be 'fix this thing claude broke in production while it tells you the tests pass'

  • KZettlmeier
    Kendall Zettlmeier (@KZettlmeier) reported

    @davidfowl @github I would love agent mode to handle code review comments and issues but leaving the merging to the code writer (we have QA validate after an approval)

  • BeauJohnson89
    Beau Johnson (@BeauJohnson89) reported

    agent skills are becoming the new software package DenisSergeevitch/agents-best-practices > 120 stars on github > created today > provider-neutral skill for codex + claude code > designs and audits agent harnesses > covers tools, permissions, memory, evals, prompt caching, observability, and safety the important line from the readme: the model proposes actions. the harness validates, authorizes, executes, records, and returns observations. that is the whole game. most people keep trying to fix agents with bigger prompts. the real fix is a tighter harness.
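The "model proposes actions; the harness validates, authorizes, executes, records, and returns observations" line quoted above can be sketched as a minimal loop. The tool table, the allow-list policy, and every function signature here are invented for illustration, not taken from the repo.

```python
def run_harness(propose, tools, allowed, max_steps=5):
    """Loop: the model proposes; the harness does everything else."""
    log, observation = [], None
    for _ in range(max_steps):
        action = propose(observation)      # model proposes an action
        if action is None:                 # model is done
            break
        name, args = action
        if name not in tools or name not in allowed:
            # validate + authorize: unknown or unpermitted tools never run
            observation = {"error": f"action {name!r} not authorized"}
        else:
            observation = tools[name](**args)  # harness executes
        log.append((action, observation))      # harness records
    return log                                 # observations fed back each turn
```

The point of the pattern is visible in the sketch: the model never touches a tool directly, so a tighter harness (stricter `allowed` set, richer logging) changes agent behavior without changing the prompt at all.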
