On Nov 20, 5:26 pm, "ness...@xxxxxxxxxxxxxxxxxxx"
<ness...@xxxxxxxxxxxxxxxxxxx> wrote:
> <Quote>
> With its first public beta of Microsoft Windows HPC Server 2008,
> released last week, Microsoft coincidentally highlighted one of the
> reasons why Windows Vista adoption figures have remained near-non-
> existent in the enterprise -- its expensive hardware requirements.
> </Quote>
>
> http://www.computerworld.com.au/index.php/id;2091560903;fp;2;fpid;1
I found this passage interesting:
<blockquote>
Vista, on the other hand, is intended for home and office desktops. On
top of the 1Gb minimum memory requirement, Microsoft recommends 2Gb or
4Gb to achieve the best experience.
</blockquote>
4 GB to achieve the "best experience", compared to XP, which performs very
well at 512 MB and gains little from going to 1 GB because hard-drive
delays become the bottleneck.
Microsoft often generates a huge market for Linux because a major
upgrade usually exceeds the capacity of the motherboards designed for
its predecessor. Windows 3.1 only needed about 4 MB, but Windows 95
needed at least 16 MB of RAM. Upgrading to Windows NT 4.0 meant
expanding memory to at least 64 MB to get a reasonably reliable
system. The upgrade from NT 4.0 to Windows XP required at least
256 MB, but most people needed at least 512 MB to get reasonable
performance. And now Vista requires a motherboard capable of
supporting at least 4 GB.
The problem with these huge memory requirements in server environments
is that they limit your ability to virtualize. If I only need 256 MB
for a Linux VM to get good server performance, and I need 2 GB to get
the same performance from Windows in a complex application
environment, I can take a server with 4 GB of RAM and configure
2 Windows VMs or 16 Linux VMs on the same hardware. In a 12-blade
array with 8 GB per blade, that works out to 384 Linux servers versus
48 Windows servers. Can you see why this might be a problem? Even if
you don't virtualize, you can often use 32 Java VMs to run 32
concurrent applications, or otherwise balance your load to get more
"bang for the buck".
In practice, you might opt for more concurrent users on each Linux
server, more functionality in each server, or message queues that
spread the load across a group of forked server processes.
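A rough sketch of that forked-worker pattern in Python (the
handle_request body and the worker count are placeholders I made up,
not anyone's production design):

import multiprocessing as mp

def handle_request(msg):
    # Placeholder for real per-request work (parse, query, respond).
    return "handled: " + msg

def worker(queue, results):
    # Each forked worker pulls messages until it sees the None sentinel.
    while True:
        msg = queue.get()
        if msg is None:
            break
        results.put(handle_request(msg))

if __name__ == "__main__":
    queue, results = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(queue, results))
               for _ in range(4)]      # 4 workers: an arbitrary example count
    for p in workers:
        p.start()
    for i in range(10):
        queue.put("request-%d" % i)    # producers enqueue work
    for _ in workers:
        queue.put(None)                # one sentinel per worker
    for p in workers:
        p.join()
    while not results.empty():
        print(results.get())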
Earlier today someone posted a tirade against the use of scripts and
text-based command structures on Linux. Beyond the reduced labor
costs, one advantage of being able to write these "shell commands"
into cron jobs, server startup scripts, and other "no labor"
maintenance tasks is that you don't waste huge amounts of RAM, CPU
power, and memory bandwidth supporting high-resolution multicolor
graphics, help systems, and exception handling that may never be
invoked. That kind of "overhead" can increase the server count
required to implement a Windows-based solution.
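For instance, a trivial Python cleanup script (the path and schedule
here are invented for illustration) can run headless from cron, with
none of the GUI overhead described above:

#!/usr/bin/env python3
# Illustrative "no labor" maintenance task: delete temp files older than 7 days.
# A crontab entry such as   30 2 * * * /usr/local/bin/cleanup_tmp.py
# runs it nightly with no display, window manager, or operator required.
# The /var/tmp/myapp path is a made-up example.
import os
import time

TARGET_DIR = "/var/tmp/myapp"
MAX_AGE = 7 * 24 * 3600  # seconds

now = time.time()
for name in os.listdir(TARGET_DIR):
    path = os.path.join(TARGET_DIR, name)
    if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE:
        os.remove(path)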
Perhaps this is why Windows is becoming less popular as a platform for
new solutions, while Linux is now often the platform targeted first.