
GitHub Outage Map

The map below depicts the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, please submit a report below.

[Outage heatmap]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

GitHub users affected: Less → More (report density color scale)
Check Current Status

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
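Beyond user-submitted reports like those on this page, GitHub publishes its own service health at githubstatus.com, a Statuspage-backed site that typically exposes a JSON summary endpoint. A minimal sketch of interpreting that payload, assuming the standard Statuspage `/api/v2/status.json` format (the `summarize_status` helper and the sample payload below are illustrative, not part of any official client):

```python
import json
import urllib.request


def summarize_status(payload: dict) -> str:
    """Turn a Statuspage-style status payload into a one-line summary.

    Statuspage summaries nest a human-readable description and a severity
    indicator ("none", "minor", "major", "critical") under "status".
    """
    page_name = payload.get("page", {}).get("name", "unknown")
    status = payload.get("status", {})
    description = status.get("description", "unknown")
    indicator = status.get("indicator", "none")
    return f"{page_name}: {description} ({indicator})"


# Sample payload mirroring the assumed Statuspage format:
sample = {
    "page": {"name": "GitHub"},
    "status": {"indicator": "none", "description": "All Systems Operational"},
}
print(summarize_status(sample))  # GitHub: All Systems Operational (none)

# To query live (assumption: the endpoint is reachable from your network):
#   with urllib.request.urlopen(
#       "https://www.githubstatus.com/api/v2/status.json"
#   ) as resp:
#       print(summarize_status(json.load(resp)))
```

The live request is left commented out so the sketch runs offline; swapping in the real endpoint only changes where the payload comes from, not how it is read.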

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location | Reports
Paris, Île-de-France | 1
Berlin, Berlin | 2
Dortmund, NRW | 1
Davenport, IA | 1
St Helens, England | 1
Nové Strašecí, Central Bohemia | 1
West Lake Sammamish, WA | 3
Parkersburg, WV | 1
Perpignan, Occitanie | 1
Piura, Piura | 1
Tokyo, Tokyo | 1
Brownsville, FL | 1
New Delhi, NCT | 1
Kannur, KL | 1
Newark, NJ | 1
Raszyn, Mazovia | 1
Trichūr, KL | 1
Departamento de Capital, MZ | 1
Chão de Cevada, Faro | 1
New York City, NY | 1
León de los Aldama, GUA | 1
Quito, Pichincha | 1
Belfast, Northern Ireland | 1
Guayaquil, Guayas | 1
Irvington, NJ | 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issues Reports

Latest outage, problem, and issue reports from social media:

  • paytkaleiwahea
    Payton (@paytkaleiwahea) reported

    Here are the tools and systems that actually move the needle on content production. Bookmark this:
    • Claude or an agentic system + GitHub: one-input content OS, every-platform output
    • OBS dual-format: shoot vertical and horizontal at the same time
    • Replay buffer on OBS: clip capture; I set mine to 1.5 minutes to cut down editing
    • Repurpose service: one post on one platform repurposes to all others
    • Premiere templates + hotkeys: editing speed doubles when the timeline is already built
    • Hardware list: camera, Stream Deck, teleprompter, lighting, and don't you dare forget audio (Shure/Rodecaster)

  • donpark
    Don Park (@donpark) reported

    @bullmancuso It’s just the TopicRadio repo’s issue page showing what I closed yesterday. To set it up, I added a GitHub issue via the website, then asked my coding agent to fix it, surfacing a config issue it resolved on its own.

  • NieRFan999
    NF99 (@NieRFan999) reported

    @_fukayuki_ @06sixx That's the thing though. No official materials are distributed. Only server code is being shared. Check the GitHub. No official Square Enix assets are being shared. The server was created independently and from scratch.

  • grippysockdev
    patrick mahloy (@grippysockdev) reported

    Who has exp contributing to OSS? Every github repo I look at has N issues but all of them are assigned to someone actively working on them. Not sure I have the bandwidth to watch repos so closely...

  • Coherent_Design
    The Structural Architect ⚡ (@Coherent_Design) reported

    After digging way too deep into GitHub Copilot Pro vs Pro+: Pro+ does not appear to meaningfully solve the real pain point for heavy VS Code users: the short-term / session-level throttling where Copilot suddenly stops mid-task, truncates, or "continue" barely works.
    What it does seem to do:
    • more monthly premium requests
    • fuller model access
    • some evidence of slightly higher model-specific limits / priority
    What it does not seem to do:
    • eliminate mid-task stoppage
    • prevent active agent sessions from choking under load
    • turn Copilot into a no-throttle coding agent
    So the honest conclusion is: Pro+ raises the ceiling a bit, but it does not remove the wall. The best practical mitigations still look like:
    • Auto model selection
    • one agent at a time
    • use frontier models for hard reasoning, not long grind sessions
    • use base models for sustained editing
    Feels like the real problem is backend/service-level throttling, not the monthly quota. Anyone else seeing the same thing in VS Code?

  • Rinnegatamante
    Rinnegatamante (@Rinnegatamante) reported

    @ulrich5000 Try to get the v.1.2 from GitHub (there might be some caching issue on VitaDB that makes the vpk change propagate only after some hours).

  • AfterThe925
    NiNE (@AfterThe925) reported

    Two weeks ago, deploying an AI agent took a weekend and a GitHub degree. Now: dashboard, click, running. Anthropic handles sandboxing, retries, auth. Platforms handle hosting, integrations, memory. The infrastructure layer is being commoditized in real time. Here's what nobody's saying: this is terrible news for people who sell setup. And great news for everyone else. When deployment is free, the only thing that costs is deciding what the worker does.

  • PsudoMike
    PsudoMike 🇨🇦 (@PsudoMike) reported

    @github Triage is exactly where accessibility falls apart at most orgs. Too slow, too manual. By the time a fix ships, context is gone. AI keeping that loop tight is smart. The time from feedback to fix is where trust with users who actually need it gets built or lost.

  • jimmy_toan
    Jimmy (@jimmy_toan) reported

    Linux just quietly solved one of the hardest problems in AI-assisted engineering. And nobody framed it that way.

    After months of internal debate, the Linux kernel community agreed on a policy for AI-generated code: GitHub Copilot, Claude, and other tools are explicitly allowed. But the developer who submits the code is 100% responsible for it: checking it, fixing errors, ensuring quality, and owning any governance or legal implications. The phrase from the announcement: "Humans take the fall for mistakes." That's not a slogan. That's an accountability architecture.

    Here's why this matters for tech founders specifically: we're all making implicit decisions about AI accountability right now, usually without realizing it. 🧵

    The question isn't whether your team uses AI to write code. They do, or they will. The question is: who is accountable when it's wrong? In most startups, the answer is fuzzy:
    • The engineer who prompted it assumes it's fine because it passed tests
    • The reviewer approves it because it looks correct
    • The PM shipped it because it met the spec
    • The founder finds out when a customer reports it

    Nobody "owns" the AI contribution explicitly. Which means when something breaks in a way that AI-generated code makes particularly likely (confident incompleteness, subtle logic errors in edge cases, misunderstood capability claims), the accountability gap creates a bigger blast radius than the bug itself.

    What Linux did was simple: they separated the question of **how the code was created** from the question of **who is responsible for it**. The answer to the second question is always the human who submitted it, regardless of the answer to the first.

    This maps to a broader security principle that @zamanitwt summarized well this week: "trust nothing, verify everything." That's not just a network security policy. Applied to AI-generated code, it means:
    → Don't trust that Copilot's suggestion is correct because it passed linting
    → Don't trust that the AI-generated function handles edge cases it wasn't shown
    → Don't assume the AI tested the capabilities it claimed to support

    And for founders:
    1. **Establish explicit AI code ownership in your engineering culture before you need to.** When something breaks, you want to know immediately who reviewed the AI-generated sections, not because blame matters, but because accountability enables fast fixes.
    2. **Zero-trust for AI outputs is not paranoia; it's good engineering.** Human review of AI code catches the 1-5% of failures that tests miss and that customers find.
    3. **The liability question is coming for AI-generated code.** Linux addressed it proactively. Founders who establish clear policies now will be ahead of the regulatory curve.

    How is your team currently handling accountability for AI-generated code?

  • emmanuelao_
    Emmanuel AO - The DevOps Fixer 🐧 (@emmanuelao_) reported

    Certificates don't make you an engineer. Shipping broken things and fixing them does. Your GitHub is a better CV than your Coursera.

  • abdonrd
    Abdón Rodríguez (@abdonrd) reported

    @timneutkens @jespertwitties Do you have a GitHub issue for this? I can't find it, and I ran into the same problem updating from v16.1.6 to 16.2.3 on a self-hosted Docker setup.

  • dodgelander
    dod (@dodgelander) reported

    @SuperClawPaul @dwlz how about an amd senior engineer on github issues

  • abdonrd
    Abdón Rodríguez (@abdonrd) reported

    @themcmxciv Do you have a GitHub issue for this? I can't find it, and I ran into the same problem updating from v16.1.6 to 16.2.3 on a self-hosted Docker setup.

  • s_cintioli_
    Stefano Cintioli (@s_cintioli_) reported

    Before credits ran out I had 13 slides in Next.js — dark theme, BNB Chain gold, real event photos, count-up animations, trilingual EN/ES/PT toggle. Then v0 stopped mid-fix. Broken logo. No way to continue inside v0. So I just... downloaded the export zip and moved it to GitHub.

  • GranvilleChri10
    Granville Christopher (@GranvilleChri10) reported

    @Railway I’m unable to log into my account. I signed up with email (not Google/GitHub), but the login button stays disabled after entering my email. Tried different browsers & incognito — still not working. Please help. @railway_status
