
Re: Why Linux is more secure than Windows, and why it runs better...

[H]omer <spam@xxxxxxx> espoused:
> Verily I say unto thee, that tblanchard001@xxxxxxxxx spake thusly:
>> http://www.theregister.co.uk/security/security_report_windows_vs_linux/#bursting
> [...]
>> Finally, a monolithic system is unstable by nature. When you design a
>> system that has too many interdependencies, you introduce numerous
>> risks when you change one piece of the system. One change may (and
>> usually does) have a cascading effect on all of the services and
>> applications that depend on that piece of the system.
> 
> Any well designed system (monolithic or otherwise) will work well; the
> problem is that maintaining monolithic systems is infinitely more
> difficult. As far as it is possible to tell without access to the
> source (anecdotal evidence), Windows is neither well designed nor well
> maintained, and also (and perhaps more importantly) seems to be
> constantly compromised by some perceived need to inhibit user control
> and access, usually driven by demands outside the developers' control
> (e.g. the MPAA), but also at the behest of upper management (e.g.
> change SMB to "f**k with Samba").
> 
> Microsoft are their own worst enemy, and having started out on a weak
> foundation, continue to further weaken it in the name of Draconian
> restrictions. The fact that they seem inept at designing and maintaining
> even the fundamentals only exacerbates the problem, but then, given the
> business methods employed by those who steer the company, the developers
> are immediately at an unfair disadvantage. They have my sympathies.
> 

The problem with vertically integrated systems, particularly when
they're coupled to other vertically integrated systems across
proprietary interfaces (e.g. Windows desktops talking to Windows
servers), is that the interactions become too complex to model.
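
As a back-of-the-envelope illustration (my own, not from the article or
the thread): if all n components of a system may interact with each
other, there are n(n-1)/2 pairs of interactions to reason about, while
a strictly layered design has only n-1 interfaces.  In Python:

    def coupled_interactions(n):
        """Fully coupled: any of the n components may talk to any other."""
        return n * (n - 1) // 2

    def layered_interactions(n):
        """Strict layering: each component talks only to the one below it."""
        return n - 1

    for n in (10, 50, 200):
        print(n, coupled_interactions(n), layered_interactions(n))
        # 10 -> 45 vs 9, 50 -> 1225 vs 49, 200 -> 19900 vs 199

The combinatorics blow up quadratically, which is exactly the "too
complex to model" problem.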

This leaves anyone working on such systems with an increasing fear of
changing anything /just in case/ it breaks some other component in an
undefined way.

What's not really brought out in the Register's excellent article above
is the sheer amount of testing which open-source systems, modules,
libraries, applications and so on receive.  Since all of this is
publicly documented (the Debian website is a good example), it tends to
be taken for granted, but in reality it is of major importance.

In principle, it should be very easy indeed to break a modular system:
just find the module with the most reverse dependencies (libc, the
kernel vfs subsystem, etc.) and break that bit.
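
It's easy enough to see where that pressure would concentrate.  Here's
a quick sketch (mine, not from the article) which ranks packages by
reverse-dependency count on a Debian-ish box; it assumes apt-cache is
available, and the package names are just examples:

    import subprocess

    def rdepends_count(pkg):
        # "apt-cache rdepends PKG" prints the package name, a
        # "Reverse Depends:" header, then one indented dependant per line.
        out = subprocess.run(["apt-cache", "rdepends", pkg],
                             capture_output=True, text=True,
                             check=True).stdout
        return sum(1 for line in out.splitlines() if line.startswith(" "))

    for pkg in ("libc6", "zlib1g", "perl-base"):
        print(pkg, rdepends_count(pkg), "reverse dependencies")

The counts are rough (duplicates and virtual packages are included),
but they make the point about where the risk sits.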
The reason this hasn't happened, or at least has not happened very
often, is that the degree of testing which takes place is phenomenal.
Debian, for instance, has rules regarding package stability which must
be met /before/ a package can make it into testing, and again before it
can reach stable.  Once in stable there is no tinkering; only security
fixes are applied.
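
For flavour, a toy model of that unstable-to-testing gate (my own
sketch, heavily simplified; the real migration scripts check far more,
though the age thresholds below are the traditional defaults):

    from dataclasses import dataclass

    AGE_REQUIRED = {"low": 10, "medium": 5, "high": 2}   # days in unstable

    @dataclass
    class Package:
        name: str
        urgency: str            # "low", "medium" or "high"
        days_in_unstable: int
        rc_bugs: int            # open release-critical bugs
        built_everywhere: bool  # built on all release architectures

    def may_migrate(p):
        return (p.days_in_unstable >= AGE_REQUIRED[p.urgency]
                and p.rc_bugs == 0
                and p.built_everywhere)

    print(may_migrate(Package("hello", "medium", 7, 0, True)))   # True
    print(may_migrate(Package("buggy", "low", 30, 2, True)))     # False

A package which fails any of those tests simply sits in unstable until
it's fixed; nothing migrates on a schedule alone.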

This process leads to a massively stable system and, in particular,
discourages the tendency to bodge one bit of the system to fix flaws
in another, a practice common in vertically integrated systems when
the "newer" piece is considered less "central" than the older one.

The Register's article is so good that I wonder whether we should add
a link to it to the FAQ?

-- 
| Mark Kent   --   mark at ellandroad dot demon dot co dot uk          |
| Cola faq:  http://www.faqs.org/faqs/linux/advocacy/faq-and-primer/   |
| Cola trolls:  http://colatrolls.blogspot.com/                        |
| My (new) blog:  http://www.thereisnomagic.org                        |
