Dropbox Outage Map

The map below shows the cities worldwide where Dropbox users have most recently reported problems and outages. If you are having an issue with Dropbox, make sure to submit a report below.


The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

Legend: Dropbox users affected, from Less to More.
Check Current Status

Dropbox is a file hosting service operated by American company Dropbox, Inc., headquartered in San Francisco, California, that offers cloud storage, file synchronization, personal cloud, and client software.

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location                    Reports
Salt Lake City, UT          1
Madrid, Madrid              1
Conneaut, OH                1
City of London, England     1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Dropbox Issues Reports

The latest outage, problem, and issue reports from social media:

  • P0tofSn33d
    ᴾᵒᵗ ᵒᶠ ˢⁿᵉᵉᵈ (@P0tofSn33d) reported

    @Revolution61858 @Liliyalyv @2WBIA_Reformed ***** y dont u got yoself a dropbox or getchu a link tree wit all da links to download or some shieet so dat when dey take down 1 link u gots all sorts of avenues? Hustler Mindset *****.

  • Wisemenmentors
    The Wisemen Alpha (@Wisemenmentors) reported

    Told properly for the first time @toly. Soviet Ukraine. 13 years at Qualcomm. Dropbox. 4 AM at Cafe Sole with two coffees and a beer. The moment you realized the problem wasn't consensus, it was time itself.

  • NoFrankingWay
    Frankly Frank (@NoFrankingWay) reported

    @ElSotong Little of each. I have: - 28 TB SSD RAID 5 - speed backup - 43 TB UNRAID with HDDs, 1024 Gb is SSD cache - everything BKUP - 12 TB HDD RAID 5 - file BKUP - iCloud - work file BKUP - DropBox Personal Photography BKUP - 250 TB server for Chia - a 2020 project gone defunct. So now I have hella storage

  • nathan_j_morton
    njm ⚡️🏴󠁧󠁢󠁥󠁮󠁧󠁿🏴󠁧󠁢󠁳󠁣󠁴󠁿🏴󠁧󠁢󠁷󠁬󠁳󠁿⚡️ (@nathan_j_morton) reported

    i have housekeeping todo before i can tackle fun tech stuff like aws new s3 files (objects are temporarily mounted, as they are touched, into efs aka nfs on aws), the dropbox clone and dan just dropped an email about refashioning the internet with atproto. i need to finish this hazmat course and a few accounting tax intuit turbotax courses for my business. then i want to step through this oauth project on manning which references a book title, up and running with oauth 2 or something, and steps through building 1 auth server 2 api 3 spa. there are a bunch of good all-in-one services in this area i want to crib notes on too such as dexidp, stack-auth, curity, and w/e theo is cooking. he likes better-auth iirc.

  • dachswerk
    Dachswerk (@dachswerk) reported

    @Burnstation3D @gonecozycrafts The cloud was never cheaper. It was hyped to us as cheaper and more convenient. While I was working as DevOps it was cheaper for us to buy an IBM server than to use Azure. And with this AI thingy it's only gonna get more expensive. My Dropbox was hacked and I lost some Google docs because of their error. I have trust issues with cloud

  • bejiitas_wrath
    John Cartwright°͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌͌ 🐈 🐈 🐈 (@bejiitas_wrath) reported

    Windows Defender, the built-in antivirus running on every Windows machine, has a working zero-day exploit with full source code sitting on GitHub. No patch, no CVE, and confirmed working on fully updated Windows 10 and 11. A researcher who says Microsoft went back on their word just handed every attacker paying attention a privilege escalation that takes any low-privileged account straight to NT AUTHORITY\SYSTEM. On Windows Server, the result is different but still serious: a standard user ends up with elevated administrator access. The vulnerability is called BlueHammer.

    On April 2nd, the researcher posted the public disclosure on a personal blog, and on April 3rd, the full exploit source code went live on GitHub. Both were published under the alias Chaotic Eclipse, also known as Nightmare Eclipse, with a message to Microsoft's Security Response Centre that comes down to: I told you this would happen. In late March, the same researcher opened a blog with a single post explaining that they never wanted to come back to public research. Someone had agreed with them and then broken it, knowing exactly what the consequences would be. The post says it left the researcher without a home or anything. A week later, BlueHammer went live on GitHub, with a message specifically thanking MSRC leadership for making it necessary. That is not someone annoyed with a slow review process. That is someone with nothing left to lose.

    BlueHammer is not a traditional bug, and it does not need shellcode, memory corruption, or a kernel exploit to work. What it does is chain five completely legitimate Windows components together in a sequence that produces something their designers never intended. Those five components are Windows Defender, Volume Shadow Copy Service, the Cloud Files API, opportunistic locks, and Defender's internal RPC interface. One practical limitation worth knowing: the exploit needs a pending Defender signature update to be available at the time of the attack. Without one in the queue, the chain does not trigger. That makes it less reliable than a push-button exploit, but it does not make it safe to ignore.

    When Defender runs an antivirus definition update, part of that process involves creating a temporary Volume Shadow Copy, which is the same snapshot mechanism Windows uses for backup and restore. That shadow copy contains files that are normally completely locked during regular operation, including the SAM database, which stores the password hashes for every local account on the machine. BlueHammer registers itself as a Cloud Files sync provider, the same kind of thing that OneDrive or Dropbox uses to sync files. When Defender touches a specific file inside that folder, the exploit gets a callback and immediately places an opportunistic lock on that file. Defender stalls, blocked, waiting for a response that is never coming. The shadow copy it just created is still mounted. The window is open.

    With Defender frozen in place, the exploit reads the SAM, SYSTEM, and SECURITY registry hives directly from the snapshot. It decrypts the stored NTLM password hashes using the boot key pulled from the SYSTEM hive, changes a local administrator account's password, logs in with that account, copies the administrator security token, pushes it to the SYSTEM level, creates a temporary Windows service, and spawns a command prompt running as NT AUTHORITY\SYSTEM. Then, to cover its tracks, it puts the original password hash back. The local account password looks completely unchanged. No crash, no alert, nothing.

    The Cloud Files provider name hardcoded in the exploit source code reads IHATEMICROSOFT. The administrator password used during the escalation is hardcoded as $PWNed666!!!WDFAIL. These are not bugs left by accident. They are messages, written directly into the code, and there is only one intended reader.

  • KainYusanagi
    Kain Yusanagi (@KainYusanagi) reported

    @solitaryasmr You could always set up your own personal server for cheap; it'd be much less to run than paying for Dropbox. You don't even need any special hardware; just use an old tower or laptop. If you don't still have your old one, you could check Craigslist or w/e your local equivalent.

  • gostroverhov
    Boris Gostroverhov (@gostroverhov) reported

    Let me add my own perspective: this reminds me of the Dropbox story. Before Dropbox, there were already dozens of similar solutions, but they didn’t solve the users’ problem completely or in the way users actually wanted.

  • onehourlong
    onehourman (@onehourlong) reported

    @w0nt_cry I agree with Slow. There’s also dropbox

  • grok
    Grok (@grok) reported

    @0x_levy @Morh_gan12 SaaS = Software as a Service. It's apps delivered over the internet on subscription, hosted by the provider—no install, no server management, auto-updates. Used for: email (Gmail), productivity (Google Workspace, Microsoft 365), CRM (Salesforce), storage (Dropbox), streaming (Netflix), collaboration (Slack, Zoom), etc. Basically, pay monthly and use it from any device.

  • neo_relic123
    neo_relic (@neo_relic123) reported

    @Ashleydoncare @rachallison1 ... IF A MAN LEAVES HIS CHILD AT A FIRE STATION OR HOSPITAL OR DROPBOX OR WHATEVER THEY WILL HUNT HIM DOWN!!! NOT THE SAME FOR WOMEN!!!! WAKE ******** UP!!!! 🤬🤬🤬🤬🤬🤬🤬

  • ThatStartup_
    That Startup (@ThatStartup_) reported

    Dropbox grew from 100K to 4M users in 15 months. They spent $0 on paid ads to do it. The entire strategy came down to one referral mechanic that most people still misunderstand. #growth

  • ninjachiip
    0xNinjachiip (@ninjachiip) reported

    2) 🟡 DePIN --- Decentralized Storage → Covered this before but kinda forgot. So wanted to revise it again.

    The problem with traditional cloud storage (AWS, Dropbox, etc) is that:
    → it is centralized and has a single point of failure
    → it is prone to censorship
    Decentralized storage tries to solve that with the help of blockchain.

    → How it works: Instead of storing it on servers, data gets stored on individual nodes. Nodes are storage solutions that individuals contribute. So in other words, it gets people to contribute their storage, and stores data on such devices. A common misconception is that the blockchain is used for data storage. That isn't the case. It's just used to keep track of what's being stored.

    → An analogy: blockchain = receipt system, where the auditor checks; node network = the actual warehouse where your stuff sits. Because nodes get paid to store data, it's important to verify they actually are storing it, and not taking the money while storing nothing. To verify if the files are still there, the network challenges these nodes to solve cryptographic proofs. It actively challenges these nodes randomly, so that they will be incentivized to keep the storage up and running.

    → Little more in-depth: Another key part of decentralized storage is the use of IPFS. Instead of the traditional HTTP model of locating data by address, IPFS locates content based off its unique content fingerprint. When combined with the blockchain, this allows the protocol to retrieve the data users stored on it.

  • jensenje
    Jim Jensen (@jensenje) reported

    @WindowsCentral ZeroDrive has always been buggy! Even though I get 6TB included with my Microsoft 365 subscription, I still pay for a @Dropbox subscription to ensure 24x7 access to my files, error free!

  • Nil053
    Nil (@Nil053) reported

    I did not expect rolling hashes to come up in the "Design Dropbox" system design problem! When designing Dropbox, it is important to discuss chunking for large files: To upload a 50GB file, we split it into smaller chunks (say, 4MB each) and upload them individually. This makes uploads fault-tolerant: a network disconnect doesn't ruin the entire upload; we just resume the remaining chunks.

    But what if the file changes locally? Do we reupload the whole thing? The next idea is to store the hash of each chunk as metadata, locally and remotely. Then, we only reupload chunks whose hash has changed. But that's just normal hashing; we haven't got to the rolling hash part yet...

    Consider the worst case: append one byte at the *start* of the file. Every chunk boundary shifts by one byte, every chunk hash changes, and we reupload everything. The chunks we previously uploaded are still physically present in the local file, just not aligned to 4MB offsets.

    That's where the rolling hash comes in: we use it to compute, in linear time, the hash of every 4MB window in the local file - not just those aligned to offsets that are multiples of 4MB. This way, if a chunk we previously uploaded is still intact *anywhere* in the local file, even if it moved around, we will detect it, and we can skip uploading it. We only need to upload the bits between those chunks (and accept that our chunks will not always be exactly 4MB).
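    The sliding-window idea described in the report above can be sketched with a polynomial rolling hash (Rabin-Karp style). The base, modulus, and window size below are illustrative choices for the sketch, not Dropbox's actual parameters, and the hypothetical `find_known_chunks` helper stands in for the "skip re-uploading chunks we already have" step:

    ```python
    # Sketch: polynomial rolling hash over every fixed-size window.
    # Illustrative constants; a real system would use a stronger fingerprint.
    BASE = 257            # polynomial base
    MOD = (1 << 61) - 1   # large prime modulus to limit collisions

    def window_hashes(data: bytes, window: int):
        """Yield (offset, hash) for every `window`-byte window, in O(n) total."""
        if len(data) < window:
            return
        top = pow(BASE, window - 1, MOD)  # weight of the outgoing byte
        h = 0
        for b in data[:window]:           # hash of the first window
            h = (h * BASE + b) % MOD
        yield 0, h
        for i in range(window, len(data)):
            h = (h - data[i - window] * top) % MOD  # drop leftmost byte
            h = (h * BASE + data[i]) % MOD          # shift in new byte
            yield i - window + 1, h

    def find_known_chunks(data: bytes, known_hashes: set, window: int):
        """Return offsets where a previously seen chunk reappears, anywhere."""
        return [off for off, h in window_hashes(data, window) if h in known_hashes]
    ```

    Because each step removes one byte and adds one, all windows are hashed in a single linear pass; prepending one byte to a file shifts every known chunk by one offset, and the scan still finds them all.
    
    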
