GitHub status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Problems in the last 24 hours
The graph below depicts the number of GitHub reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is declared.
At the moment, we haven't detected any problems at GitHub. Are you experiencing issues or an outage? Leave a message in the comments section!
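Outage detection of this kind is typically a baseline-exceedance check: flag an outage when the current report count rises well above the historical norm. Below is a minimal sketch in Python; the mean-plus-standard-deviations baseline, the multiplier, and the sample report counts are illustrative assumptions, not this site's actual algorithm.

```python
# Hypothetical sketch of baseline-exceedance outage detection.
# The baseline formula, multiplier, and sample counts are illustrative
# assumptions, not the monitoring site's real implementation.
from statistics import mean, stdev

def is_outage(hourly_reports, current_reports, multiplier=3.0):
    """Flag an outage when the current report count exceeds the
    historical baseline (mean + multiplier * standard deviation)."""
    baseline = mean(hourly_reports) + multiplier * stdev(hourly_reports)
    return current_reports > baseline

history = [4, 6, 5, 7, 5, 6, 4, 5]   # typical hourly report counts
print(is_outage(history, 40))        # a spike well above baseline -> True
print(is_outage(history, 5))         # normal volume -> False
```

A real system would also smooth for time-of-day seasonality (report volume at 3 a.m. differs from 3 p.m.), which is why the red baseline in the graph varies across the day rather than being a flat threshold.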
Most Reported Problems
The following are the problems most commonly reported by GitHub users through our website:
- Website Down (62%)
- Errors (21%)
- Sign in (18%)
Live Outage Map
The most recent GitHub outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Sign in | 2 days ago |
| | Website Down | 2 days ago |
| | Website Down | 4 days ago |
| | Sign in | 5 days ago |
| | Website Down | 9 days ago |
| | Website Down | 9 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
The latest outage, problem, and issue reports from social media:
- 青川一 (@worigoule) reported: @ZooL_Smith And then you google solutions and found yourself ended up in a github merge request or more likely a issue page written in like, 2 years ago
- Adibou (@Adibougre) reported: @ShanuMathew93 "the older models that are no longer SOTA will get competed down as competition increases" Github didn't get the memo
- Pierce Boggan (@pierceboggan) reported: @codingmenace @cristiampereira Yes, that's correct. The new GitHub Copilot app has an experience similar to this that solves that problem, but something to be improved in the Agents window in VS Code.
- Beau Johnson (@BeauJohnson89) reported: agent skills are becoming the new software package DenisSergeevitch/agents-best-practices > 120 stars on github > created today > provider-neutral skill for codex + claude code > designs and audits agent harnesses > covers tools, permissions, memory, evals, prompt caching, observability, and safety the important line from the readme: the model proposes actions. the harness validates, authorizes, executes, records, and returns observations. that is the whole game. most people keep trying to fix agents with bigger prompts. the real fix is a tighter harness.
- Emedy (@EmedyXBT) reported: @Bybit_Official @BybitAfrica Local virtual dollar cards made perfect sense when you first discovered them. Naira cards were restricted from processing international transactions, which meant apps like Spotify, Amazon, Adobe, GoDaddy, GitHub, and so many others became unreachable. Local fintech apps launched USD-denominated virtual cards within minutes. The problem looked finished. So we used them, recommended them to friends, and kept using them.
- Lazi (@algoritmii) reported: @github bro ffs fix your ******* issues stop pushing features
- nadya (@sosidudku) reported: ran Hermes Agent vs OpenClaw on local model Qwen 3.6 35B task: scrape GitHub star history, find what caused the growth spikes, build a live dashboard in the browser OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations.
- Jason (@jasonbunnell) reported: @claudeai FEATURE REQUEST: if user finds an issue with Claude Code and Claude resolves (or not), you should auto increment GitHub issue to keep track of real user issues per issue to resolve as needed Had an issue with Claude Code for VS Code extension but noticed only 10 likes
- Kunal Yeola (@yeolakunal) reported: Asked GitHub Copilot to fix ESLint issues and it added eslint-disable at the beginning of the file 😭
- Sharbel (@sharbel) reported: Someone opensourced a Chromium browser that passes every bot detection test. Not by injecting JavaScript. Not by patching configs. By recompiling Chromium itself. It's called CloakBrowser. 12,071 stars on GitHub. You swap one import line. That's it. Same Playwright API you already know. Same code you already wrote. Three lines of code. Thirty seconds to go from blocked to unblocked. Here's what it does: → 49 source-level C++ patches baked directly into the Chromium binary. Canvas, WebGL, audio, fonts, GPU, screen resolution, WebRTC, network timing, CDP input behavior, automation signals. All modified before the browser even compiles. → Passes Cloudflare Turnstile. Not sometimes. Every time. Verified live. → Scores 0.9 on reCAPTCHA v3. Human-level. Server-verified. → Passes FingerprintJS and BrowserScan. Tested against 30+ detection sites. 30/30 tests passed. → `humanize=True` flag adds human-like mouse curves, keyboard timing, and scroll patterns. One flag. Behavioral detection gone. → Drop-in replacement for Playwright and Puppeteer. Python and JavaScript both supported. → `pip install cloakbrowser` or `npm install cloakbrowser`. Binary auto-downloads on first run. Zero config. → Auto-updating binary. Background update checks. Always on the latest stealth build. → Optional GeoIP flag auto-detects timezone and locale from your proxy IP. → Docker image available. Try it with zero install: `docker run --rm cloakhq/cloakbrowser cloaktest`. Here's the wildest part: Every other antidetect browser patches JavaScript at runtime. Detection systems catch JavaScript patches. They have for years. That's why your $99/month tool stopped working after two weeks. CloakBrowser patches the C++ source before Cloudflare's systems ever see a single byte. Antibot systems score it as a normal browser. Because it is a normal browser. One that happens to have 49 fingerprint modifications compiled in at the source level. There is no JavaScript to detect. There is no injection to flag. There is nothing to catch. Browserless charges $120/month for cloud browser automation. Bright Data's Scraping Browser starts at $500/month. Multilogin starts at $99/month. Per user. Apify cloud actors run on usage-based billing that scales fast. CloakBrowser: $0. Unlimited scrapes. Unlimited sessions. Your hardware. Your code. Forever. 12,071 stars. 921 forks. Available on PyPI and npm. MIT licensed. Self-hosted. Free forever. 100% Open Source.
- Benjamins (@The__Benjamins) reported: @drewlevin @gl4cial The Github issue comments have been up for more then 2 weeks, my devrel support ticket is 12 days old
- Sean Keenan (@sean9keenan) reported: @brian_lovin Semi-relatedly: I’m back to VS Code from Cursor, autocomplete seems much better now! (Not that I’m crafting code by hand much) But importantly, the… basics seem much more stable (Cmd+f, and saving have been pretty broken in Cursor recently) Curious how GitHub Copilot feels!
- Peter Steinberger 🦞 (@steipete) reported: @EndGovTyranny Please file a github issue with more infos - with that alone we can't help. That's likely a weird model edge case. If you want a fast fix, use one of the top-gen models (OAI, Anthropic)
- ST-Automation (@ST_Automation) reported: @cnakazawa @amadeus @fat Local diff viewers are the sleeper category. We do code review on five repos a week and the GitHub UI is just slow. If Codiff handles 10k line diffs without choking it replaces the GitHub tab entirely.
- Jeromy Sonne (@JeromySonne) reported: @TJ_Bongiorno None of them. MTAs fundamentally are broken technology not worth it. Claude can do a proper lift study DIY or using an open source framework from GitHub. Build don’t buy and save the $$$
- Adi (@wtfaditya_) reported: @azwan_ Yes its down, They don’t want us to deploy anything on Friday, For global wellbeing 🫡 @github
- RYMAR (@rymaaaar) reported: A 10-year-old kid built a trading bot that pulls $4,200/month on autopilot He's 10. He doesn't play Roblox like other kids. He sits in front of three monitors and writes Python for 6 hours a day. And he's making real money from it. He started watching coding tutorials on YouTube when he was 6. By 7 he was solving LeetCode Mediums. By 8 he had his first paying client on Fiverr - the guy had no idea he was paying a kid. In the video he's debugging an algorithmic trading bot. Real risk management. Real position sizing. Stuff most CS grads can't write. His parents say he's already pulled in $47,200 from freelance gigs and his own SaaS subscriptions. He doesn't watch cartoons. He reads GitHub issues. While other kids his age are learning long division, he's running an automated income stream from his bedroom. His goal by 12 is to hit $10k MRR and retire his parents.
- Kevin Worthington (@kworthington) reported: Practical IT take after the recent npm / PyPI supply-chain compromise reports: Your build pipeline is production infrastructure. If a package install can expose GitHub tokens, cloud keys, or CI/CD secrets, that is not "just a developer issue." That is an operations problem with a security bill attached. #DevOps #Cybersecurity #SupplyChainSecurity
- Aditya Sharma (@aditya_sharma) reported: elon musk dropped the X algorithm on github. i read all 25,000 lines so you don't have to. here's what actually decides your reach. what actually matters - dwell time is the entire game. how long someone pauses on your post is counted twice in the scoring. likes barely move the needle. the pause does. - saves and shares are the highest-value engagement after dwell. they signal the strongest intent. - video has a minimum duration floor. clips shorter than the threshold get zero video credit. five seconds plus, always. - one post per conversation thread survives in any feed. your five-post thread competes with itself. the algorithm picks the strongest one. - replies to big accounts (1000+ followers) get scored on a 0-3 quality scale. high score and you land in the reply panel of viral tweets. low score and you're invisible. - replies to small accounts get a binary spam check only. no quality scoring path. no reach upside. - mutual follow overlap matters. tight clusters of mutuals create reach corridors for everyone in them. - clear topic identity beats vague posting. the algorithm tags your post with topics. clear topics route you to people who follow those topics. - new accounts on the platform get an easier path to reach you than established ones. if you target young/new users, the algorithm is on your side. what kills your reach - posting too often. the algorithm has decay coded in. your second post of the day gets a fraction of your first. your fifth gets almost nothing. - quoting or replying to a flagged tweet. you inherit the badness. your whole post gets dropped even if it's clean. - ai slop. there's a dedicated slop detector that scores your post 1 to 3. high slop = killed reach. - being unclear what your post is about. vague content doesn't match anyone's interests cleanly. - mid-controversial content. it gets pushed away from the high-attention slots in the feed because ads can't sit next to it. - posting your own tweet's reply hoping it boosts the original. only one of them shows up. it might be the reply, not the original. myths to kill - hashtags do nothing. zero boost in the code. they're not even read by the ranker. - premium doesn't get you reach. paid and free accounts go through the same pipeline. - long threads don't beat single posts. the algorithm picks one post per thread. - engagement bait doesn't work. it trips spam classifiers on low-follower accounts. - posting twelve times a day doesn't get twelve impressions. it gets one strong one and eleven weak ones competing with each other. - replying to viral tweets isn't easy reach. the quality bar is high. cheap replies fall straight into the spam path. - timing tricks don't beat ranking. timing helps you enter the candidate pool. quality decides if you win. - external links don't hurt you. clicks are actually one of the 19 positive scoring signals. - the algorithm doesn't hate any specific format. it hates unclear content. format is fine if the content is sharp. - you don't need 10k followers to get reach. the algorithm doesn't read follower count as a scoring input. it reads engagement quality. the playbook - write posts that make people pause for 5+ seconds. dense info, clear structure, screenshots with detail, comparisons. - if you use video, clear the duration floor. always. pick one clear topic per post. don't mix five things into one tweet. - reply to bigger accounts in your niche with substantive, high-effort replies. one good reply beats ten mediocre ones. - build mutuals in tight clusters around your niche. broad spray-follow strategies don't help. focused clustering does. - post 1-2 times a day, not 10. quality compounds, volume decays. - don't quote tweets that look flagged or risky. clean what you cite. - write like a human. don't post ai output verbatim. target newer users on the platform if you can. they have a friendlier reach path for creators. if you're a small account starting out - replies to big accounts in your niche are your highest-leverage move - build a tight mutual cluster of 50-200 accounts in your exact space - one strong post a day beats five medium ones clear topic identity, every single post if you have an established audience - your reach problem is breaking outside your network - dwell time on individual posts is your biggest unused lever - clean brand safety keeps you in prime feed slots next to ads - volume hurts you more as you grow, not less the whole system is built on one bet: that a model fed engagement data can decide relevance better than any rule. there's no hashtag boost, no follower boost, no time-of-day trick in the code. just sequences in, probabilities out. what works is what humans actually want to read. the algorithm is just better at measuring it now.
- Kendall Zettlmeier (@KZettlmeier) reported: @davidfowl @github I would love agent mode to handle code review comments and issues but leaving the merging to the code writer (we have QA validate after an approval)
- BigShark🦈 (@King_Shark02) reported: @_FarmercistP_ This is a game-changer for creators on X. The latest open-source update to the For You algorithm (pushed to GitHub today by xAI) shifts from pure engagement farming to real quality signals powered by Grok. Here’s a breakdown based on the video summary and the repo: 1. **Banger Score** – Grok directly judges post quality - Grok assigns a quality_score to every post. - Reposts treat anything 0.4+ as passing the “banger” filter for wider distribution. - Key insight: X isn’t just chasing likes/replies anymore. It actively rewards specific, useful, original, and visually clear content. Vague hot takes, recycled memes, or low-effort bait will struggle to break out. This is huge. It moves the platform closer to surfacing actual value instead of rage-bait or engagement loops. 2. **Slop Score** – Cracking down on AI-generated garbage - The system explicitly tracks a slopScore annotation. - Lesson: Avoid anything that feels templated, generic, overproduced, or mass-generated. Make it sound human, with a clear personal voice and specific point. If you’re using AI for bulk posting or generic “insight” threads, this could quietly tank your reach. Authenticity wins. 3. **“Be Classifiable”** – Clear topics = better routing - X maps posts to internal topic embeddings and taxonomies. - Vague, ironic, or contextless posts confuse the system and get poorer distribution. - Make it obvious what your post is about (e.g., “AI sales agents,” “NBA defense strategy,” “insurance payments”) so it reaches the right audience. **Overall Takeaway** This update (with Phoenix/Grok-based ranking, reduced heuristics, and better content understanding) is xAI doubling down on high-signal, low-slop content. Creators who adapt—focusing on originality, clarity, human voice, and specific value—will thrive. Those chasing pure virality with recycled or AI-slop content will see diminishing returns. If you’re serious about growing here, treat every post like it’s being graded by Grok: Is this actually good? Does it add something new? Is it unmistakably about something useful? Great summary in the video—thanks for breaking it down simply. Excited to see how the feed evolves. 🚀
- Danilo (@Daniel_adsss) reported: Elon just dropped the entire X algorithm on GitHub and the code tells you exactly how to win the For You feed. Grok scores every post based on predicted engagement. Likes, replies, reposts all push you up. Blocks, mutes and reports drag you down. Which means every sharp comment you leave on a big account is training the algorithm to show more people like you that content. 16.5k stars in 24 hours. Developers already pulling it apart.
- AtomicNodes (@AtomicNodes) reported: Hermes Agent vs OpenClaw on Local Qwen 3.6 35B We asked agents to scrape GitHub star history for both tools, find what caused the growth spikes, build a live dashboard in the browser. MacBook Pro M5 Max 64Gb. OpenClaw: 203k tokens, 12m 01s — wrote a bash script Hermes: 257k tokens, 33m 01s — wrote a SKILL.md OpenClaw: hit GitHub API, got truncated responses, paginated through contributors, pulled star-history JSON, found a security incident in OpenClaw's history, fetched SVGs, fixed broken HTML from trimming, rewrote it clean. Hermes: parallel tool calls across GitHub API, web search, and browser. Hit Google rate limit, auto-switched to DuckDuckGo. Fetched article contents, mapped viral moments, then built the dashboard. Both shipped a live dashboard with star growth charts and spike annotations
- Dewaldt Huysamen (@GodsBoy7777) reported: @sickdotdev Getting insane and better results just on medium for all of the above categories. Weirdest is Opus 4.7 fails at basic school tasks help for kids and when I do code GPT 5.5 finds issues that are found in any case on github CI checks. If use codex CI passes more than 99%
- Saskia | Marketing Growth Consulting 🇨🇭 (@solobrandsaskia) reported: Getting viral on X in 2026 boils down to one thing: beating the Grok-based transformer that ranks every post in the “For You” feed. The algorithm Elon just open-sourced on GitHub is now fully powered by the Phoenix transformer (Grok-based). No more hand-tuned rules or old heuristics. It predicts your specific engagement probability — likes, replies, reposts, bookmarks, dwell time, etc. — based purely on learned behavior patterns. How the algorithm actually works (the 3-stage pipeline) Candidate Sourcing (pool of ~1,500 posts) ~50% in-network: posts from people you follow. ~50% out-of-network: posts from strangers grouped into massive interest clusters. This is where virality happens for non-followers. Ranking (the make-or-break step) The Grok-based transformer scores every candidate on predicted engagement. Top signals: Engagement velocity in the first 30 minutes = massive multiplier Replies are weighted far more heavily than likes Reposts/shares matter more than raw views Dwell time: how long people stop scrolling and actually consume the content Author authority/credibility (Premium accounts get a major reach boost) Recency + relationship strength Filtering Spam reports, low-quality signals, mutes, and blocks hurt reach hard. One negative trust signal can outweigh multiple likes. Virality trigger: If your post gets strong early engagement from your initial audience, the algorithm starts testing it on wider out-of-network audiences. That’s when posts snowball. The exact playbook to grow on X right now Get X Premium Premium now acts like an authority signal. Accounts without it have a much harder time scaling reach. Win the first 30 minutes Post when your audience is online Reply to every comment quickly Early momentum matters more than almost anything else Slow start = dead post Create content the transformer wants to push Strong hook in the first 3–5 words Contrarian takes, numbers, bold claims, tension Content that sparks replies and debate Emotional + useful beats “educational only” High dwell time matters: threads, long-form posts, charts, screenshots, videos Rich media is heavily favored Original content performs best What kills reach External links in the main post Engagement bait (“like if you agree”) Posting too frequently with mediocre content Generic low-effort posts Hashtags have little impact now because the system relies more on semantic understanding Daily system for consistent growth Post 1 strong piece of content daily Batch content weekly Go hard on engagement in the first hour Stay focused on 2–3 core topics so the algorithm understands your audience cluster Quick checklist before posting X Premium active Strong first-line hook Visual or video attached Thread format if relevant No external link in the main post Ready to engage for 30+ minutes after posting The game is becoming increasingly transparent. Most people still create content like it’s 2021. The advantage now comes from understanding how the ranking system actually evaluates attention, conversation, and retention.
- Kristopher Betz (@kjbetz) reported: @davidfowl I think I do... I push code to GitHub. Actions kick off, build new containers, build new migraines, then self hosted runners pick it up and run migrations, and auto update containers which pull down new images and restart containers.
- Emil Privér (@emil_priver) reported: github actions is broken today
- 𝒹ℯ𝓁𝓁𝓎_𝓉𝒽ℯ_𝒹ℯ𝓈𝒾𝑔𝓃ℯ𝓇 (@dellyricch2) reported: Elon says the latest 𝕏 algorithm has been published to GitHub Can someone please break it down for us