Amazon Web Services Outage Map
The map below shows the cities worldwide from which Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, make sure to submit a report below.
The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is indicated by the color scale shown below.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
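For readers unfamiliar with these services, the sketch below shows how EC2 and S3 are typically accessed programmatically using boto3, the official AWS SDK for Python. This is a minimal illustration, not part of the outage data: it assumes AWS credentials are already configured locally (e.g. via ~/.aws/credentials), and the region name is only an example.

```python
# Minimal sketch: listing S3 buckets and EC2 instances with boto3.
# Assumes credentials are configured locally; "us-east-1" is an example region.
import boto3

session = boto3.Session(region_name="us-east-1")

# S3: list the buckets visible to these credentials.
s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("S3 bucket:", bucket["Name"])

# EC2: list instance IDs and their current state in this region.
ec2 = session.client("ec2")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print("EC2 instance:", instance["InstanceId"], instance["State"]["Name"])
```

During a regional outage such as those reported below, calls like these typically fail with timeouts or service errors, which is often how users first notice a problem.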
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Birmingham, England | 2 |
| San Jose, CA | 2 |
| Fortín de las Flores, VER | 1 |
| Seneca Falls, NY | 1 |
| Canby, OR | 1 |
| Los Angeles, CA | 1 |
| Greater Noida, UP | 1 |
| Hamburg, HH | 1 |
| Uniontown, PA | 1 |
| New York City, NY | 1 |
| Cali, Departamento del Valle del Cauca | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issues Reports
The latest outage and problem reports from social media:
-
Ramon Gainez
(@RamonGainez) reported
@cloudpundit @awscloud Kinesis has a central dependency somewhere in us-east-1. My us-east-2 pipelines had elevated error rates almost the whole day. And yes, r53 was down, but DNS capabilities across AWS were affected
-
RJ
(@DickJim3) reported
@QuinnyPig @awscloud I humbly disagree. Organizations need to ensure redundancy based on their SLA requirements. Hospitals may need 100% redundancy. Others, maybe not even close to that. IT needs to plan properly. That’s where the failure happened. Every service at some point will go down. Plan for it
-
Martin Pinnau
(@martinpinnau) reported
I heard some washing machines were affected by the Amazon AWS outage. Why would a washing machine need internet connectivity? Someone has to be there to load & unload it, so you can’t really run it remotely. What data & connectivity would it need?
-
Matt Brawley
(@mbrawley1) reported
@PeterDiamandis Not if @awscloud has an outage and my brain stops working
-
Geeky Tech - Ooo Presents! 🚀🎄🎁🕹️✌️♥️⚛️
(@xucaen) reported
@AWSSupport The link you posted does not take me to an article pertaining to this issue. It says something about Lightsail and blocks.
-
Lars Marowsky-Brée
(@larsmb) reported
@hikhvar @awscloud Yeah, that makes sense; most scenarios have pretty 1:1 relations, so that's not too terrible.
-
John 💀
(@iamnottomgreen) reported
@awscloud Fix your damn servers.
-
Stef with an F - NO LISTS 👩🏾‍🍳🔪🔥☁️🏳️‍🌈 ♏️
(@stef2dotoh) reported
The purchase experience was riddled with errors and frozen screens. @Amazon @awscloud is not ready for prime time. It couldn't handle the traffic.
-
Kirk Kelly
(@KirkKelly) reported
Thanks @awscloud @JeffBezos lost out on a rush audition due to aws being down and I had a great feeling about that one. I could scream right..........
-
Lars Marowsky-Brée
(@larsmb) reported
@hikhvar @awscloud Yes. Reconciling data after partitions is hard, but feasible in most cases. (That said, so would be a server-side DR architecture that assumed a cooperative client. Ah well.)
-
KRIS
(@KRIS2175) reported
All shade to @Ticketmaster & @awscloud for causing even the registered @Adele fans to have their hearts broken. A code is no good if you send conflicting texts and email notifications with wrong time slot data. We miss her and y’all ain’t helpin #Adele #adeletickets #ticketmaster
-
Eoin Jennings
(@eoin_jennings) reported
@CTOAdvisor @awscloud You have to tolerate risk or you can’t push features - the problem is the systematic risk
-
Ramon Gainez
(@RamonGainez) reported
@cloudpundit @awscloud In the real world this didn’t happen though. Far too many AWS managed services have central dependencies on us-east-1. Who cares if I can spin up EC2 instances in another AZ if Kinesis is down and my RDS failover died because Amazon’s DNS isn’t resilient to AZ failure
-
blockparty_sh
(@BlockpartySh) reported
@awscloud sucks and is run by the NSA -> criminal Keith Alexander on Amazon's board. They just shut down a ton of smartBCH servers (not mine) without warning + deleted disks. I recommend all companies in Bitcoin Cash refuse to use them for anything.
-
Lydia Leong
(@cloudpundit) reported
In large part what's interesting is what *wasn't* affected by the @awscloud us-east-1 issues: Other regions. Despite the immense size of us-east-1, other regions were able to take the increased load from customers who were multi-region or did regional failover, AFAIK?