Lightning Strikes Amazon's Cloud (Really)
The Register has details on a recent EC2 outage that is being blamed on a lightning strike that zapped a power distribution unit at the data center. The interruption lasted only around six hours, but the irony should last much longer. "While Amazon was correcting the problem, it told customers they had the option of launching new server instances to replace those that went down. But customers were also able to wait for their original instances to come back up after power was restored to the hardware in question."
Irony? (Score:5, Insightful)
Re:Irony? (Score:5, Funny)
It evidently did (Score:2, Redundant)
If I'm not mistaken, the whole point of a cloud is that you spread your processing around different hardware (in different geographies) so that no single part failing constitutes a total failure. Only one of Amazon's two zones went down, so a well-designed cloud app shouldn't have failed.
Re: (Score:3, Informative)
If you want to guarantee data integrity and consistent data between your instances, then you cannot tolerate one out of two going down. Byzantine agreement protocols can tolerate failures in fewer than one third of nodes, so you would actually need four nodes to tolerate one failure.
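For the curious, that arithmetic is just the classic replication bounds: n >= 3f + 1 for Byzantine faults versus n >= 2f + 1 for plain crashes. A toy Python sketch (the function names are made up for illustration):

    # Minimum replica counts implied by the standard fault-tolerance bounds.
    def min_bft_replicas(f):
        """Replicas needed to tolerate f Byzantine (arbitrarily misbehaving) nodes."""
        return 3 * f + 1

    def min_crash_replicas(f):
        """Replicas needed to tolerate f fail-stop (crash-only) nodes."""
        return 2 * f + 1

    print(min_bft_replicas(1))    # 4: the "four to tolerate one failure" above
    print(min_crash_replicas(1))  # 3: cheaper when nodes only crash, never lie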
Re: (Score:2)
This failure was fail-stop, not Byzantine.
Re: (Score:1)
Customers shouldn't really need to run their own Byzantine agreement system, though.
Re: (Score:3, Informative)
Only one of Amazon's two zones went down
There are two regions (US and EU), each with several availability zones (the US currently has four). The AZs are designed to be isolated from one another. This outage affected one AZ in the US region.
If you are doing load balancing across instances in multiple AZs (or even using Amazon's own Elastic Load Balancing and Auto-Scaling features), you would have been fine, since this is exactly the kind of problem they're designed to handle.
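For what it's worth, here's a minimal sketch of that with boto (the Python AWS library); the AMI ID, zone names, and instance type below are placeholders, not anything from the article:

    # Launch one instance in each of two availability zones so that a
    # single-AZ outage (like this one) doesn't take the whole app down.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    for zone in ('us-east-1a', 'us-east-1b'):
        conn.run_instances('ami-12345678',            # your prepared machine image
                           instance_type='m1.small',
                           placement=zone)            # pin this instance to one AZ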
Re: (Score:1)
Amazon's definition of a "cloud" is a whole bunch of Xen-based VPSes running in less than a handful of data centers here and there.
Resilience is an exercise left to the customer.
Re: (Score:2)
Cloud computing promises to do none of this. What it is is datacenter provisioning akin to the mainframe days of old, except you get to "mostly" choose the platform you want to run on, Wi
Re: (Score:2)
Well, get a bigger laptop then! Or just move your user profile.
Re:Irony? (Score:4, Insightful)
The irony is that a cloud was struck by lightning. Lightning usually comes from clouds.
Sometimes we all need to tone back the nerd a bit :)
Re: (Score:3, Funny)
Sometimes we all need to tone back the nerd a bit :)
What? Get out. *points*
Re:Irony? (Score:4, Funny)
Well, as it just so happens, most lightning is ground-to-cloud or cloud-to-cloud, with very little cloud-to-ground.
My nerdiness goes up to 11 by the way.
Re: (Score:2)
There's a "in soviet amazon" joke in there somewhere. I know it!
Struck the cloud, eh? (Score:2, Funny)
Did it leave a silver lining?
Who covers the cost? (Score:2, Interesting)
Naive question: Are data centers usually insured for the cost of hardware replacement and/or loss of revenue in a situation like this?
Re: (Score:1)
Re: (Score:3, Insightful)
Who covers the cost then?
Re: (Score:2)
Well, now that that's over with.... (Score:2, Funny)
There's nothing to worry about, because as we all know, Lightning never strikes twice.
Yay for savings on the surge protectors!
Re: (Score:3, Funny)
It's just that lightning never strikes only twice. There's nothing stopping you from getting hit more than twice.
http://en.wikipedia.org/wiki/Roy_Sullivan [wikipedia.org]
Re: (Score:1)
Also, lightning rods on tall buildings can get hit hundreds of times annually.
Lightning once striked our office building. (Score:3, Interesting)
I have to wonder if those who are critical of Amazon here have ever experienced a direct lightning strike? I doubt it.
Re:Lightning once striked our office building. (Score:5, Insightful)
So what's the deal with having all copies of these VMs in one datacenter? That's not very The Cloud of them. Maybe they should replicate all of EC2 to GFS. Would The Cloud win then?
Customers being given the option of redeploying their VMs or waiting an unspecified period of time until The Cloud is back online isn't The Cloud we were promised.
Re:Lightning once striked our office building. (Score:4, Insightful)
I'm thinking critically because Amazon, EMC, VMWare, etc bill The Cloud as a mystical place where you throw your shit and then it's universally available 100%. Nothing bad happens in The Cloud.
No, they don't. You're either being disingenuous, or idiotic.
So what's the deal with having all copies of these VMs in one datacenter? That's not very The Cloud of them.
So you expect Amazon to somehow be running the same VM simultaneously on multiple machines? The point of EC2 is that you have machine images prepared in advance, which you can launch at any time to instantiate a new, ready-to-go VM. The VMs themselves are obviously still running on actual machines, which are (surprise!) still vulnerable to things like lightning strikes and other random hardware failures.
If a few minutes downtime when something like that happens is unacceptable, then you should be running multiple machines in different availability zones-- which is exactly what you'd be doing in a more traditional environment. EC2 just makes it easier to do this in a flexible way. Yes, you pay for that privilege, but it's clearly worth it to some people.
Re:Lightning once striked our office building. (Score:5, Informative)
"Amazon EC2 provides developers the tools to build failure resilient applications and isolate themselves from failure scenarios."
"you can protect your applications from failure of a single location"
Re: (Score:2)
Amazon EC2 provides developers the tools to build failure resilient applications and isolate themselves from failure scenarios.
you can protect your applications from failure of a single location
If you look closely, you may be able to discern a difference between the previous two statements and the following:
...The Cloud [is] a mystical place where you throw your shit and then it's universally available 100%. Nothing bad happens in The Cloud.
Can you guess which statement was not made by Amazon?
Re: (Score:1, Insightful)
Endless arguing. Did or didn't Amazon say that by using the cloud you "protect your application from failure of a single location"? And did or didn't this happen? Answering the two questions in the right order will explain what the OP meant, even to you.
Re: (Score:2)
The failure described by the article affected one availability zone out of seven in the EC2 cloud. Anybody who built their application redundantly across multiple zones would not have been affected by the outage.
Re: (Score:2)
You still have to design your apps to be distributed in some manner. You can't just throw your single process server code onto the cloud and expect it to be failure resistant without giving it any thought. You have to purchase extra capacity and decide which locations they should reside in. Any user purchasing 1 o
Re:Lightning once striked our office building. (Score:4, Insightful)
"Amazon EC2 provides developers the tools to build failure resilient applications and isolate themselves from failure scenarios."
Let's highlight the words that need emphasis.
"provides", "developers", "tools"
Whether the developers use them or not isn't always automatic.
"you can protect your applications from failure of a single location"
"can"
High availability does not mean fault tolerance. The latter allows an application to continue functioning after a component failure. Regardless of the snake oil that has been thrown around, there is no silver bullet that can automagically make an application multi-node aware with no chance of deadlock or data corruption. You need to program for this. Again, tools are provided, but that doesn't mean everyone will use them. So in the absence of a fault-tolerant application, the cloud provides high availability.
Re: (Score:2)
Regardless of the snake oil that has been thrown around, there is no silver bullet that can automagically enable application to be multi-node aware with no chance of deadlock or data corruption.
It's not a silver bullet, but you can give the same input to two [virtual] machines, and if one fails, the traffic is picked up by the other one. It does, however, provide pretty linear redundancy, potentially at the cost of some latency.
Re: (Score:3, Interesting)
I'm thinking critically because Amazon, EMC, VMWare, etc bill The Cloud as a mystical place where you throw your shit and then it's universally available 100%. Nothing bad happens in The Cloud.
No, they don't. You're either being disingenuous, or idiotic.
Per http://aws.amazon.com/ec2/#highlights [amazon.com], Amazon is promising: "Reliable - Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned. The service runs within Amazon's proven network infrastructure and datacenters. The Amazon EC2 Service Level Agreement commitment is 99.95% availability for each Amazon EC2 Region."
The irony here is that 6 hours of downtime in a year works out to 99.93% availability, so they've already blown it for the year.
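The back-of-the-envelope math, for anyone checking (assumes a 365-day year):

    hours_per_year = 365 * 24                    # 8760
    allowed_downtime = hours_per_year * 0.0005   # 99.95% SLA: about 4.4 hours/year
    availability = 1 - 6.0 / hours_per_year      # a 6-hour outage
    print(round(allowed_downtime, 1))            # 4.4
    print(round(availability * 100, 2))          # 99.93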
So what's the deal with having all copies of these VMs in one datacenter? That's not very The Cloud of them.
If it's only one instance runn
Re: (Score:3, Insightful)
A region consists of multiple datacenters. 99.93% would be for 1 datacenter, not the region.
Re: (Score:1)
So I guess you're saying that EC2 isn't a cloud.
---
HTML isn't what it's marked up to be.
Re: (Score:2)
The whole point being that you pay only for the resilience YOU want, not for a bunch of things that may or may not be appropriate depending on your app. Amazon can't know whether bringing an image up is safe or not
Re: (Score:2)
I'm thinking critically because Amazon, EMC, VMWare, etc bill The Cloud as a mystical place where you throw your shit and then it's universally available 100%. Nothing bad happens in The Cloud.
No, they don't. You're either being disingenuous, or idiotic.
His irony went entirely over your head. Look at the word I rendered in bold. Get the irony now? Duh.
Re: (Score:2)
I choooooose....2:) Disingenuous FTW Alex. I was also being 3:) drunk and snarky, and annoyed with EMC and VMWare spinning cloud computing as "fault tolerant" computing somehow.
The sales pitch of The Cloud is that, and yes I've heard this, you can move VMs from one physical location to another with no downtime. I fail to see how that pitch works in terms of IP subnets, which must be different for the networks to work, but there you have it.
Re: (Score:2)
The sales pitch of The Cloud is that, and yes I've heard this, you can move VMs from one physical location to another with no downtime.
I'd be interested to know where you heard this. I don't recall Amazon ever making such claims (yes, I know you also mentioned EMC and VMWare in your original post, but this story is about Amazon after all).
Re: (Score:2)
Re: (Score:2)
I suspect by "move" they mean "copy and re-ip" and by no downtime, they mean "ecxept for DNS change propagation time", but I'm no VM/Cloud Computing expert yet. I'm not saying it can't happen, but I really need that part explained to me, and no VMWare or EMC people have been able to do so adequately yet.
You do not have to deal with DNS change propagation. You have two choices here. You can use Elastic IP addresses, which are permanent and can be assigned to any instance you want:
Instance A goes down.
You bring up instance B and assign the IP that was on instance A to B.
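Roughly, with boto (the Python AWS library), that failover step looks like this; the instance ID and address are placeholders:

    # Re-point the Elastic IP from dead instance A to replacement instance B.
    # No DNS change is involved, so there's nothing to wait on propagating.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    conn.associate_address(instance_id='i-0b0b0b0b',   # instance B, already running
                           public_ip='203.0.113.10')   # the Elastic IP that pointed at A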
Or you can use Elastic Load Balancing which gives you a public CNAME that you can use to load balance across instances. The ELB is itself fault tolerant and can exist in multiple availability zones.
The ELB can also be configured to automatically
Lightning once striked my friends house. (Score:1, Interesting)
Re:Lightning once striked my friends house. (Score:5, Funny)
Re: (Score:3, Interesting)
Three days is lucky. My very first job (many, many moons ago) was at a company which had a few 5, 10 and 15 meter SATCOM dishes outside. One fall night, a set of severe T-storms rolled through around 2 am, and lightning struck the SAT farm. Nearly knocked me out of my NOC chair where I was fighting to stay awake, and I swore something big had exploded outside.
Turns out, one of the SAT dishes had not been properly grounded, and the current surged through the SATNET into our internal networks. Several mid
Re:Lightning once striked our office building. (Score:5, Interesting)
Just so people know, this can be a real bitch.
I took a direct lightning strike at one site I work with. It entered the corner of the building, traveled down the inside wall leaving a scorch mark on two levels, and went into the basement where all the servers and switches were located. The lightning then traveled through the electrical service main lines to an encased transformer located in the parking lot next door, causing it to explode with enough force that it shattered the windows of the bank building next door, and a door panel was found on a roof about a block away. It appears that one half of the electrical system was grounded properly through a specific ground rod, and the other half was tied into the plumbing that ran inches away from the lightning rod grounds. When they purchased the building, they didn't redo all the electrical on the side of the building that wasn't remodeled, and that way of grounding was normal.
We lost 3 of the 5 servers instantly and couldn't keep the other two stable. Both switches were down, and 20 of the 44 workstations, along with the tape backup machine, copiers, and networked printers, were completely dead when we got there. The entire building had a lightning/surge protector with battery backup and a natural gas generator on the mains, so they weren't too concerned about in-house protections for specific systems. Only the systems with a UPS directly on them survived, with the exception of the servers; I'm not sure if they died from the lightning strike or from getting soaked by the fire sprinklers that were set off by the strike (surprisingly, there was no fire).
It took us two days at almost 20 hours a day among 5 people, with a lot of borrowing from other sites, about 20 trips to five or six computer stores in the surrounding counties, and a generator, to come back online and be operational again. We even had a makeshift phone system in place while waiting on a new Avaya to come in. We did this all before the electric company got the transformer replaced and service back on. Until we replaced the other machines that were thought not to be affected, we experienced all sorts of weird behavior on the network, and I'm still not confident in the cabling even though it passed the testing. [idealindustries.com] Of course, I didn't run the certification, so it might just be me not trusting others.
If you get a direct strike, you might as well count on replacing everything in a production environment. When I say direct strike, I mean there's evidence it actually hit the building, not something that hit down the road and traveled to the building. It will be easier and cheaper in the long run. Now I have, as part of the catastrophe plan, a means to replace every computer and component on the network at one time, just to be safe. If it wasn't for two other sites having the same tape drives, we would have had to wait a week for a replacement to come in and start the data recovery process. Thank god for off-site tape storage.
Re: (Score:2)
[Lightning once striked{sic} our office building.] Our computer room was down for three days as a result. Amazon's six hour downtime looks like a big improvement.
Never had a lightning strike, but last year the building transformer that feeds our data center Fucking Exploded (I was on the other side of the building, and I tell you the earth moved.) No injuries, since it's shielded from the building by a retaining wall. Backup power (UPS, generator) went totally dark about 30 minutes later, which should never happen, but it was an odd day.
We were down for about 12 hours. And we're a University, not a Fortune-100. Massive electrical repair can happen quickly if you hav
Re: (Score:2)
Getting one transformer isn't too hard if you are willing to pay for it, unless perhaps there's been a recent massive solar flare that's burned out equipment across half of your state or something.
God here... (Score:5, Funny)
Is the message clear?
-RMS
Re: (Score:2)
I guess that come Monday morning, the discussion at Amazon's boardroom table will be more along the lines of "Devil here...". :-)
Re: (Score:2)
I hear he's in the details, so look closely
What irony? (Score:4, Insightful)
What irony?
Maybe I'm just tired, but I'm not sure what irony is being referred to by the poster.
Re: (Score:1)
Lightning killed the 'cloud'.
It's not great irony, but it's kinda there.
Re:What irony? (Score:5, Funny)
Re:What irony? (Score:5, Funny)
Regular irony is not wearing your tin foil hat on the one day someone actually does beam thoughts into your brain.
Nope. You've still got it wrong... That's still Morissette irony.
Re:What irony? (Score:5, Funny)
The real irony here is that tinfoil hats are actually required in order to beam thoughts into your head...
Re: (Score:2)
How about, "cloud computing on a sunny day only".
irony FTW!
Re: (Score:2)
Popular irony is like getting a fly in your white wine.
Actually that would be, "It's a black fly in your Chardonnay...
It's a death row pardon two minutes too late
And isn't it ironic... don't you think"
Re: (Score:2)
Regular irony is what happens to your drinking water when the junk yard dumps its old car bodies in the reservoir.
Or when a robot reads the definition of "irony" from the OED during a one-off production of the greatest opera ever.
Magnetite suspended in oil is pretty irony, too.
Your tin foil hat example is just plain, old, ordinary unfortunate coincidence, a.k.a. Alanian irony. Calling it "tin foil" when it's actually aluminum, however...
BTW, the white wine thing actually is irony (well, fairly loosely).
Re: (Score:1)
Re:What irony? (Score:5, Funny)
In Soviet Russia, clouds get hit by lightning?
Yeah, it's sorta weak, but that's what they were going for.
Re: (Score:1)
In Soviet Russia, clouds hit you!
seeing as here the clouds are what get hit...
Re: (Score:2, Insightful)
That a computing technology that was supposed to be largely immune to damage of individual "nodes" in the cloud could be taken down by lightning hitting a single point?
Re:What irony? (Score:5, Insightful)
Perhaps they were referring to the irony of Amazon's EC2 being affected by one of the very natural disasters it advertises protection against.
It's rather like an "unsinkable" vessel going down on her maiden voyage.
Apropos, sure. Irony, nah (Score:1, Redundant)
Unless by Irony, you mean "like rain on your wedding day"
Inconceivable! (Score:5, Insightful)
While everyone is talking up the cloud and how resilient it is... this is just yet another example to never put all your eggs in one basket. If your service is so damn important that it can't go down - have it hosted in two places.
Notice, Amazon.com didn't go down... :)
Re: (Score:2, Interesting)
I don't see how cloud hosting is somehow incompatible with hosting in two places.
Probability (Score:1)
Re: (Score:1)
Re:Inconceivable! (Score:5, Informative)
Well it does seem like it was pretty resilient:
While Amazon was correcting the problem, it told customers they had the option of launching new server instances to replace those that went down.
So basically a set of servers went down, and it took down the particular instances running on those servers. Customers were still able to take the same exact image and start new instances-- it sounds like immediately. Now sure, it'd be nice if they worked out some kind of automatic clustering and failover to take care of this sort of thing for you, but when my server goes down with my dedicated host, I don't have the option to start up a new host immediately with the same exact configuration.
Re: (Score:2)
So basically a set of servers went down, and it took down the particular instances running on those servers. Customers were still able to take the same exact image and start new instances-- it sounds like immediately. Now sure, it'd be nice if they worked out some kind of automatic clustering and failover to take care of this sort of thing for you, but when my server goes down with my dedicated host, I don't have the option to start up a new host immediately with the same exact configuration.
I don't think I read this the same way you did. From my reading, customers could fire up a new server instance, but I doubt it had the same data. Sure, the base OS configuration was the same, but the same data? I don't think so.
From the article:
While Amazon was correcting the problem, it told customers they had the option of launching new server instances to replace those that went down. But customers were also able to wait for their original instances to come back up after power was restored to the hardware in question.
Re: (Score:3, Informative)
EC2 instances don't hold persistent instance data themselves. The GP is correct. State data is generally stored on S3, on shared storage, or using their db interface.
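As a rough sketch of that pattern with boto (the Python AWS library); the bucket and key names are made up for illustration:

    # Keep application state in S3 rather than on the instance itself,
    # so a replacement instance launched from the same AMI can pick it up.
    import boto.s3
    from boto.s3.key import Key

    conn = boto.s3.connect_to_region('us-east-1')
    bucket = conn.get_bucket('example-app-state')

    k = Key(bucket, 'orders/latest.json')
    k.set_contents_from_string('{"last_order_id": 12345}')  # persist state off-instance
    state = k.get_contents_as_string()                       # a fresh instance reads it back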
Re: (Score:2)
You can make the "cloud" resilient, redundant, and highly available. They obviously did not. If the cloud is extended to two places then you don't need servers (virtual or otherwise) in two places.
Do any of you know how they survived? (Score:5, Interesting)
**typo** should be: is NOT written out (Score:3, Informative)
Sorry about that.
Re: (Score:3, Informative)
Re:Do any of you know how they survived? (Score:5, Insightful)
I'm reading between the lines here (it doesn't actually say this in TFA), but it sounds like this was a direct hit. Not an outage, which is a different beast.
A UPS is about as useful in this instance as antibiotics against a virus - it's a solution to a different problem. Surge protectors don't help much either, not unless the strike was a fairly mild and/or remote one. You could switch over to a disconnected UPS system every time there's a thunderstorm on the horizon, but that seems needlessly complicated and expensive.
That being said, the GP referred to an outage, so you've quite correctly answered his question; it's just the wrong question to ask in this instance. And of course I could be misreading (or Amazon could be misrepresenting) the exact nature of the failure - if it were a regular outage, none of the above would apply.
Re: (Score:3, Insightful)
You could switch over to a disconnected UPS system every time there's a thunderstorm on the horizon, but that seems needlessly complicated and expensive.
Actually, that's NOT a bad idea at all. If you used fiber to the rack and you had big ugly relays that would open the connections, it might be a useful strategy in lightning country. It shouldn't be too hard to detect when lightning is striking nearby, and open the contacts. You would definitely need to do it per-rack at minimum though, because having a battery in every system is an ecological nightmare.
Re: (Score:2)
Re:Do any of you know how they survived? (Score:4, Informative)
Typically, the RAID controller will have enough on-board capacity to flush its write cache before losing power entirely, while the drive array will be connected to a decent UPS that can hold for at least a few minutes. Meanwhile, the server itself will also likely be connected to the same UPS, or a different one.
The real question at hand is: were the UPSes between the power distribution node and the servers, or were they on the other side of the distribution node, and therefore worthless in a case like this? I've seen both configurations, but the latter is rarer, not because of this particular case, but because of efficiency concerns.
If there was a failure of design, it was most likely in the building wiring itself. The building was clearly not properly grounded against lightning strikes; if it had been, the surge would never have hit the internal wiring. It might have kicked the building off the grid for a time, but it should never have reached a power distribution node. Although it's likely the outcome would have been similar if not identical.
Re: (Score:2)
I thought the point was software fault tolerance so that the hardware is cheap and lacks the fancy features you describe.
Re: (Score:2)
This is one instance where you can have a system that's cheap, redundant, or sophisticated: pick two. Cloud computing is the cheap, redundant option, in which case they may have cut corners on eventualities like lightning strikes.
I'm more curious as to why the servers were centralized enough to be vulnerable to this. Kinda defeats the purpose of redundancy, no? OTOH, it does sound like they had enough backups in place to get everything up and running again in short order, so maybe it's unfair to second-gues
Re: (Score:2)
I'm more curious as to why the servers were centralized enough to be vulnerable to this. Kinda defeats the purpose of redundancy, no? OTOH, it does sound like they had enough backups in place to get everything up and running again in short order, so maybe it's unfair to second-guess them.
Because with Amazon, if you really care about being resilient you need to put your instances in more than one "availability zone" (i.e., datacenter). That's how they do it, they're open about this being the case, and there's really no magic, just competent hosting.
Re: (Score:2)
In the article I got the impression that they booted things back up and their apps started running again; my bad if I misread.
Thanks, everyone, for the good information! I should have realized that the solution was simpler, and the probability of my error greater.
cheers.
Re:Do any of you know how they survived? (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
Having taken weather and climate 101... (Score:3, Funny)
This is clearly a case of cloud-to-cloud lightning.
speaking of lightning and electronics. (Score:3, Interesting)
I don't remember the final resolution of the problem, but I do remember that from the 2nd strike until the problem was solved, every time I heard thunder I would run to the English building and, with my newly assigned key, run upstairs and disconnect the RJ-21 fanout cables. I would then leave a note at the English dept office informing them that they'd need to plug them back in in the a.m. One evening, I didn't make it. I heard thunder and bolted for the English dept... I had my key in the building's outside door when lightning struck the building... and I knew I was too late. When I got upstairs, I could smell burnt electronics....
Probably at the same time as this was going on for me, my dad, who was a large-scale CSE, had similar problems. I don't know how much the 16-port line cards for the system he was supporting cost, but one day he had to replace eight or nine of them. The next day, UPS delivered two cases of copper-fiber-copper serial surge suppressors, and he scheduled time to install them. I don't think that site had problems after that.