Roy Schestowitz wrote:
> ,----[ Quote ]
> | Viruses, Trojans and other malware make it onto Windows desktops for
> | a number of reasons familiar to Windows and foreign to Linux:
> |
> | 1. Windows has only recently evolved from a single-user design to a
> | multi-user model
Compare that to Linux, which was based on a multi-user model from the
very beginning. Even when Linux was used as a workstation, it was
designed with the assumption that services such as web servers, e-mail
servers, and file transfer servers might be activated. In addition,
Linux used X11, which was originally designed to let multiple X11
terminals connect to a shared "workstation server". Again, the
assumption was that remote access would at least be possible.
Early Linux distributions did not set a root password and did not
require one to be set, but after a Linux worm (Lion) successfully
attacked a few thousand improperly configured computers, all
distributions now have the user set a root password and create a
separate, unprivileged user ID during installation.
Since the beginning, all Linux filesystems have provided user, group,
and world permissions, along with the ability to mark programs "setuid"
or "setgid". In this way a function such as a helper that mounts
certain disks can be made available to a specific group of people
without giving them administrative rights or the root password.
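As a rough illustration, the permission bits and the setuid/setgid bits
can be read straight off a file's mode. This is only a sketch; the path
/usr/local/bin/mount-backup is a made-up example, not a real file on any
particular system:

    import os
    import stat

    # Sketch: inspect the classic Unix permission bits on a hypothetical
    # mount helper.
    st = os.stat("/usr/local/bin/mount-backup")
    mode = st.st_mode

    print("owner can execute:", bool(mode & stat.S_IXUSR))
    print("group can execute:", bool(mode & stat.S_IXGRP))
    print("world can execute:", bool(mode & stat.S_IXOTH))
    print("setuid bit set:", bool(mode & stat.S_ISUID))
    print("setgid bit set:", bool(mode & stat.S_ISGID))

The setuid/setgid bits are what let such a helper run with the
privileges of its owner or group rather than those of whoever invoked
it.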
> | 2. Windows is monolithic, not modular, by design
Ironically, this has been shifting over time. Microsoft has moved more
and more functionality out of the core kernel into secondary processes.
Even Microsoft is beginning to see the advantages of true multitasking.
Microsoft made some radical enhancements in the transition from NT 4.0
to Win2K, reducing the process-to-process context switch time from
roughly 100,000 cycles to about 10,000 cycles. Linux and UNIX context
switches average fewer than 1,000 cycles, and Microsoft is working on a
similar kernel that might be out around 2010.
The faster one can switch from one process to another, the lower the
cost of splitting applications into multiple processes. Early Windows
NT applications were huge; often every possible function was compiled
into a single executable. Today many of those "bundled up" applications
are being offered as smaller applications with "plug-ins". This reduces
the size of applications and eliminates the need for memory to hold
"dead code" - code that will never be executed, or will only be
executed once.
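The idea behind a plug-in design is simple: optional code is loaded
only when it is actually requested. A minimal sketch in Python, where
the "plugins" package and the "spellcheck" module are hypothetical:

    import importlib

    # Sketch: load a named plug-in only when the user actually asks for
    # it, so rarely used features never sit in memory as "dead code".
    def load_plugin(name):
        return importlib.import_module("plugins." + name)

    if __name__ == "__main__":
        spell = load_plugin("spellcheck")  # imported only if this runs
        spell.run()                        # assumes the plug-in has run()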
> | 3. Windows depends too heavily on an RPC model
The problem isn't RPC itself, but what that RPC is allowed to do. Linux
has RPC, CORBA (Bonobo and EJB), and remote shell capabilities, but all
of these can be secured in a way that limits what a remote call is
allowed to do.
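For example, a small RPC service can be bound to the loopback interface
and expose only the functions it explicitly registers. This is a hedged
sketch using Python's standard xmlrpc.server module; the get_load()
function and port 8000 are made-up examples:

    import os
    from xmlrpc.server import SimpleXMLRPCServer

    # Sketch: an RPC service that listens only on the loopback interface
    # and exposes exactly one registered function; anything not
    # registered simply cannot be called remotely.
    def get_load():
        return os.getloadavg()[0]  # harmless, read-only value

    server = SimpleXMLRPCServer(("127.0.0.1", 8000))
    server.register_function(get_load, "get_load")
    server.serve_forever()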
The cardinal rule of security is that you don't download software
unless you know how you got it, who you got it from, and what it is
supposed to do. Many administrators want either a major corporate
origin (someone they can sue) or the source code. Since most companies
require the end user to assume liability and indemnify the originating
company, the only truly "safe" course is to have and review the source
code, compile it yourself, and install it in directories that only an
authorized administrator can alter.
The REAL problem with Microsoft's model is that none of its RPC
mechanisms require any rigid authentication, and the authentication
entities that are available accept extremely limited liability. The
remote procedures can do ANYTHING, including install other applications
without the user's knowledge. Finally, all of these remote calls tend
to run with full privileges, giving remotely initiated and remotely
executed applications (a web server running a program on your PC) full
access to any file, resource, or server available from that PC. This is
also a problem when running unsigned Java applets on Windows.
Linux, on the other hand, uses Unix and CORBA authentication and
authorization mechanisms, limiting what can be done and who can do it.
Furthermore, the remote invocation is limited in which files and
directories it can access. It's even possible to "chroot" a service so
that it can ONLY see the directory tree assigned to it.
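At the system-call level, chrooting looks roughly like the sketch
below. The jail directory /srv/ftp-jail and the numeric IDs are made-up
examples; a real service would normally drop root privileges right
after entering the jail, as shown:

    import os

    # Sketch: confine the current process to a single directory tree.
    # Must be started as root.
    os.chroot("/srv/ftp-jail")  # "/" now means /srv/ftp-jail
    os.chdir("/")               # make the working directory the jail root
    os.setgid(1000)             # drop group privileges (example ID)
    os.setuid(1000)             # drop user privileges; root is gone for good

    # From here on, the process cannot open anything outside the jail.
    print(os.listdir("/"))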
Linux also provides additional security by eliminating the need to give
end users administrative rights. As a result, unintentionally installed
applications that do manage to get in and get executed can only access
that user's own directories.
Finally, Linux supports back-up and recovery strategies that make it
easier to recover when a user's directory has been attacked. ALL
settings, profiles, and personal files, as well as anything else the
user can modify, are found ONLY in the user's own directories or in
/tmp.
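Because everything a user can change lives under that one tree, backing
a user up can be as simple as archiving it. A minimal sketch, where the
username "alice" and the destination path are assumptions:

    import tarfile
    import time

    # Sketch: archive one user's home directory so it can be restored
    # after an attack or an accident.
    user = "alice"
    stamp = time.strftime("%Y%m%d")
    archive = "/var/backups/home-%s-%s.tar.gz" % (user, stamp)

    with tarfile.open(archive, "w:gz") as tar:
        tar.add("/home/" + user)  # everything the user could have changed

    print("wrote", archive)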
> | 4. Windows focuses on its familiar graphical desktop interface
> `----
The biggest security problem with this is that server administrators
end up logged on as Administrator, clicking through graphical
applications - and it's very easy to skip a critical step, deliberately
ignore it, or simply forget what was done, since there is no script or
command history to review afterwards. On top of that, running the
desktop as Administrator means the whole machine can be hijacked.
> http://www.theregister.co.uk/security/security_report_windows_vs_linux/
This article is excellent. Nice summary of things I've been saying for
years.
> ,----[ Quote ]
> | Astute observers who examine the Netcraft web site URL will note that all
> | 50 servers in the Netcraft uptime list are running a form of BSD, mostly
> | BSD/OS. None of them are running Windows, and none of them are
> | running Linux. The longest uptime in the top 50 is 1,768 consecutive
> | days, or almost 5 years."
> `----
One of the key reasons for the long uptimes is that the BSD kernel has
needed very few changes. Most BSD machines don't require kernel updates
for years. Usually the only time those machines need to be rebooted is
when the hardware overheats.
> https://www.redhat.com/archives/fedora-list/2004-October/msg04658.html
Good point here - the "uptime" counter rolls over every 497 days.
That's almost a year and a half, and even then a given machine may
already be on its second or third rollover.
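The 497-day figure follows directly from the arithmetic, assuming the
kernel tracks uptime in a 32-bit counter of 1/100-second ticks:

    # Where 497 days comes from: a 32-bit counter of 1/100-second ticks
    # wraps around after 2**32 ticks.
    ticks = 2 ** 32
    hz = 100                     # ticks per second
    days = ticks / hz / 86400    # 86400 seconds per day
    print(round(days, 1))        # -> 497.1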
The other thing is that Linux has had a number of kernel upgrades to
provide faster scheduling and better support for SMP and NUMA
processors. There have also been numerous upgrades to the libraries
and, of course, new drivers to support even faster machines.