BSOD Makes Appearance at Olympic Opening Ceremonies
Whiteox writes "A BSOD was projected onto the roof of the National Stadium during the grand finale to the four-hour spectacular at the Olympics. Lenovo chairman Yang Yuanqing chose to go with XP instead of Vista because of the complexity of the IT functions at the Games. His comment on Vista? 'If it's not stable, it could have some problems,' he said. Evidently Bill Gates attended the opening ceremony, so he must have witnessed it."
Re:well (Score:5, Insightful)
In fairness to Microsoft, blue screens are normally due to bad hardware drivers. Whatever that thing actually was, it certainly wasn't a normal monitor, and I'll bet the drivers are rather specific. And the fewer people use them, the fewer bugs get found.
Cheers,
Ian
Eh, so what? (Score:5, Insightful)
All computers crash - I've made Linux, BSD, OSX, and Solaris machines kernel panic. Hell, I've witnessed a newer zSeries mainframe crash.
The fact that it happened at an inopportune moment is unfortunate, but that's life.
... Eh, so what? ... (Score:3, Insightful)
Re:well (Score:5, Insightful)
Be realistic for a second, please: do you think a show as grand as the opening ceremonies only had one glitch? Seriously?
There is no such thing as a show this big without multiple (read: a lot of) glitches. They are covered up well, quickly fixed, or simply not noticed, but they are there. This one just happened in the open for everyone to see.
In fairness to software engineering (Score:1, Insightful)
In fairness to Microsoft, blue screens are normally due to bad hardware drivers. Whatever that thing actually was, it certainly wasn't a normal monitor, and I'll bet the drivers are rather specific. And the fewer people use them, the fewer bugs get found.
Cheers,
Ian
Jeez. MS apologists always trot out that one. Making bad engineering acceptable will probably be Bill Gates' [amazon.com] largest "contribution" to society.
In fairness to software engineering, if the "bad" hardware driver can crash the system, then the system is not ready for production and has more than a few show-stopping (no pun intended) bugs. Take a look at basic kernel or micro-kernel design principles and stop spreading the view that catastrophically bad design is acceptable.
BSOD? Big deal! (Score:5, Insightful)
Re:Oh, stop it! (Score:3, Insightful)
making bad engineering acceptable (Score:4, Insightful)
...It's not uncommon to get a BSOD from time to time.
And unless you do something about it, like vote with your wallet, you are simply helping Bill and his minions make bad engineering acceptable.
Re:... Eh, so what? ... (Score:3, Insightful)
I bet the guy in charge and the Chinese government don't see it your way.
Glitches happen, but for ceremonies like this one, this isn't a little glitch. If people notice, it's bad, especially if you're trying to impress people.
Re:well (Score:1, Insightful)
err, it's quite unlikely the PRC government will punish anyone for mishaps at the Beijing Games...
Re:In fairness to software engineering (Score:5, Insightful)
Jeez. MS apologists always trot out that one. Making bad engineering acceptable will probably be Bill Gates' [amazon.com] largest "contribution" to society.
In fairness to software engineering, if the "bad" hardware driver can crash the system, then the system is not ready for production and has more than a few show-stopping (no pun intended) bugs. Take a look at basic kernel or micro-kernel design principles and stop spreading the view that catastrophically bad design is acceptable.
Linux puts most drivers in the kernel and a bad driver there can cause a panic, bringing the system down.
Most of the BSDs, AFAIK, have some drivers in the kernel and others in userland processes.
I'm not sure how it's architected in Mac OS X, but I've certainly seen kernel panics on my Mac Mini.
There may be an embedded OS that is less susceptible to being killed by a poor driver, but for something like this you probably wouldn't bother with one, because there's so much more off-the-shelf software available to do the job on Windows and Linux.
Re:In fairness to software engineering (Score:5, Insightful)
I'm sorry, do you know of an operating system where talking to hardware cannot cause a panic? Even microkernels such as Mach are prone to these problems. ANY time you touch hardware there can be a problem if the code is wrong. Even microkernels have to allow DMA for certain hardware, and bad DMA can bring down a whole system without even trying. There's a basic design flaw in how normal computers operate that requires this sort of behavior from kernels, which leads to bad drivers affecting them. If you can name one general-purpose system for which this isn't true, I would love to hear about it.
What's their motivation.... (Score:3, Insightful)
What's the motivation to write better hardware drivers if any time the system blue screens, people will just blame the OS anyway?
Re:In fairness to software engineering (Score:4, Insightful)
Jeez. MS apologists always trot out that one.
No, people who are reasonable and levelheaded always trot out that one.
Re:well (Score:3, Insightful)
Bartscht's Law of Model Railroading:
The number of problems is directly proportional to the number of spectators.
Re:May not be the case as much any more. (Score:3, Insightful)
The problem isn't the hardware, it's the drivers. I know at least some rootkits will install themselves as drivers in order to get at the kernel's internals.
Re:BSOD? Big deal! (Score:1, Insightful)
No, this is exactly the kind of crap that Vista was meant to prevent. For example, Vista can recover from certain types of driver failure, most notably a video driver crash. And the fact that drivers must be signed in 64-bit Vista was meant to address the fact that lots of companies write bad drivers -- MS runs some tests on your driver to see if it is good enough, so that you don't BSOD people's machines and have your customers say it's Microsoft's fault.
If the guy from Lenovo weren't so dogmatic about XP's alleged "superiority" (read: "explorer.exe is more responsive, therefore the entire system is better"), which everyone seems to take as gospel, maybe he'd have had a more stable system.
Re:In fairness to software engineering (Score:2, Insightful)
Of course, if I count the times I've forced Windows to crash using the CrashOnCtrlScroll trick [com.com] for fun...
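For anyone who hasn't played with it: the trick linked above is just a registry value. A minimal sketch in Python's standard winreg module of turning it on -- the i8042prt key path (for PS/2 keyboards) is the commonly documented one, so treat it as an assumption; it needs admin rights and a reboot, and afterwards holding right Ctrl and tapping Scroll Lock twice bugchecks the box on demand.

```python
# Minimal sketch, assuming the commonly documented i8042prt key path for
# PS/2 keyboards; run as Administrator, reboot required before it takes effect.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\i8042prt\Parameters"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = right Ctrl + Scroll Lock (twice) forces a deliberate bugcheck
    winreg.SetValueEx(key, "CrashOnCtrlScroll", 0, winreg.REG_DWORD, 1)
```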
Re:In fairness to software engineering (Score:4, Insightful)
I can count on the fingers of one hand the number of kernel panics I've seen on either my Linux box or my Mac Mini. My Windows machines however.....
I've actually had my MacBook Pro freeze more times in the last year than my Windows machine. In fact, it even hung once when I closed the lid and tried to fry itself with the backlight left on. What's funny about this is that I've only had the MacBook for about 4 months, whereas I've had the Windows machine all year.
I promise you this is a true story. Your mileage may vary, even if you're a Mac user.
Re:well (Score:2, Insightful)
Could be, but I recall that at some point the default was changed to reboot... maybe with XP SP2? It must have been changed, because every newbie I help with endless-reboot problems always has "automatically restart" checked, and they've never even heard of that setting.
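For reference, that setting lives under CrashControl in the registry (it's the same thing the "Startup and Recovery" dialog toggles). A minimal sketch, assuming the standard key path, of flipping it back so the machine stays on the blue screen instead of silently rebooting:

```python
# Minimal sketch, assuming the standard CrashControl key path; run as
# Administrator. 0 keeps the blue screen on screen, 1 reboots immediately.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\CrashControl"

with winreg.OpenKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                      winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "AutoReboot", 0, winreg.REG_DWORD, 0)
```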
What about Red Flag Linux? (Score:3, Insightful)
They could've used Red Flag Linux for free. Was it not up to the task, period?
Re:In fairness to software engineering (Score:4, Insightful)
Wrong. WRONG.
Yes, Linux (as a specific example) uses drivers directly in kernel mode. HOWEVER, those drivers are PART of the OS, distributed and supported WITH the OS, and are open source, along with the rest of the kernel. Red Hat supports the whole thing.
If drivers are to be supplied "in kernel", this is REQUIRED for reliability. Take Solaris as an example: source is supplied, along with a DDI layer.
If drivers are supported ONLY via a "DDK" (driver development kit), there must be isolation between the driver and the parts of the kernel that CANNOT be understood by the driver developer. This was the primary issue with "unreliable" display drivers in the Windows 3.x days -- functionality MUST be implemented, but the reference behavior was undocumented or incorrect.
Indeed, a lot of vendors took extreme steps to deal with this issue -- permanent staff at Microsoft, or (illegally) reverse engineering the support code (GDI).
Unfortunately, the promoted Windows driver development path is "believe in the DDK, and go" without reference source. Of course, this IS prone to failure -- finally recognized in Vista (but obvious to vendors since Windows 3.x).
The solution here? Go to a micro-kernel OS, or plant parts of device drivers into standard protected mode (user space) -- both of which cause performance issues. Or keep part of your software team in Redmond.
Also, given that the interface and driving layer (what I would call a "driver") is under Microsoft's control, the test suites must come from Microsoft as well. If a "crash path" is then NOT exercised, that is ALSO Microsoft's problem. There should be no way for a higher level application to utilize anything OTHER than a tested path to the driver. If it can, the testing is useless, and "Microsoft Certification" is useless.
An analogy at the application layer: Sun has the "application guarantee". It consists of a series of tools that collect API usage (and can be run by the customer). If an application passes, and a later upgrade of Solaris BREAKS the application, it is Sun's problem (Sun fixes the OS or the application).
Everyone is missing the point (Score:3, Insightful)
It doesn't matter if using Vista would have cost twice as much, taken three times as long to set up and resulted in four times as many errors during the opening ceremony. What people saw fail was XP, and that's what Microsoft will stress.
Re:well (Score:1, Insightful)
FYI: the scheduler is the part of the OS kernel that decides which process/thread to run next. Good luck porting that to Windows. :-)
The point is, the idea that it's a "design flaw" that NTFS might leave the disk in a bad state after a failure is mistaken. This is a fundamental truth: if you lose buffered writes, the disk is not guaranteed to be kosher. That's why chkdsk (Windows) and fsck (*nix) exist in the first place. So the fact that the community hasn't written an fsck for NTFS is not Microsoft's fault; the burden is on the developers who want to provide NTFS support.
A better analogy: suppose Microsoft implemented ext2 in Windows, but not fsck. Is it Linux's fault that you can't use volumes from a hard drive that Linux did not mount properly?
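To put the same point in code: a toy sketch (every name here is hypothetical, nothing to do with the real NTFS internals or any driver API) of the choice a filesystem driver faces when it finds a volume that wasn't unmounted cleanly -- without a repair tool, read-only or refusal is all it can safely offer.

```python
# Toy sketch of the mount decision being described. All names are made up;
# this is not the real NTFS (or ext2) on-disk format or driver API.
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    dirty: bool  # set while mounted read-write, cleared on a clean unmount

def mount(vol: Volume, have_repair_tool: bool = False) -> str:
    if vol.dirty:
        if have_repair_tool:
            vol.dirty = False  # a chkdsk/fsck equivalent ran and fixed things
            return f"{vol.name}: repaired, mounted read-write"
        # without a repair tool, the safe options are read-only or refusing
        return f"{vol.name}: dirty, no repair tool, mounted read-only"
    return f"{vol.name}: clean, mounted read-write"

print(mount(Volume("ntfs0", dirty=True)))        # community driver, no fsck
print(mount(Volume("ntfs0", dirty=True), True))  # driver shipped with a repair tool
```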
Re:well (Score:1, Insightful)
To be clear: I'm the AC that posted the parent, but not the AC before that. :-) So the scheduler analogy wasn't mine.
Question, though, about this notion of creating intentional obstacles for third-party implementations... Don't you think the complexity in NTFS merely grows out of engineering problems MS had in developing it, or maybe says something about filesystems in general? When they started in the earliest days of NT (1993ish?), I don't think they were thinking about how to screw over Linux. Likewise, over the years of maintenance and features added since then, I think they were probably focused more on hacking it enough to make it work at all, especially with the legacy baggage it carries.
I believe I read a blog post about strange designs at MS... where it's not necessarily that they're purposely trying to design cryptic file formats and obfuscate them, so much as maintaining strange conventions that were optimized for 1993-era machines and get carried over from release to release.
Now, what you say about being more open, I do think that's a fair point and applies. But, as for motives, and attributing MS's actions in 1993 to their current attitude towards open source... I'm not sure that's the first explanation I would think of for that.