
Re: Virtualisation Got Simpler Than Ever, Linux Installation As Well

  • Subject: Re: Virtualisation Got Simpler Than Ever, Linux Installation As Well
  • From: "Rex Ballard" <rex.ballard@xxxxxxxxx>
  • Date: 20 Jan 2007 09:49:17 -0800
  • Complaints-to: groups-abuse@google.com
  • In-reply-to: <3123766.fI8yF4xXxb@schestowitz.com>
  • Injection-info: 11g2000cwr.googlegroups.com; posting-host=67.80.98.116; posting-account=W7I-5gwAAACdjXtgBZS0v1SA93ztSMgH
  • Newsgroups: comp.os.linux.advocacy
  • Organization: http://groups.google.com
  • References: <3123766.fI8yF4xXxb@schestowitz.com>
  • User-agent: G2/1.0
  • Xref: ellandroad.demon.co.uk comp.os.linux.advocacy:485148
Roy Schestowitz wrote:
> Virtualization: Simpler Than Ever
>
> Like dual-boot, hypervisors are becoming something a newbie can handle.
> Ubuntu will soon be installable from a Windows installer (includes
> partitioning). See below:
>
> http://searchopensource.techtarget.com/tip/0,289483,sid39_gci1238129,00.html
> https://wiki.ubuntu.com/install.exe/Prototype
>
It's rather interesting that so much of the focus of the press
coverage of virtualization seems to be around servers.  The irony is
that virtualization on the desktop is a much bigger story.
Virtualization of the server has been around for a LONG time.  VM was
first created back in the late 1960s, and was used to emulate multiple
IBM DOS systems.  Later, VM was enhanced to support multiple MVS
systems and was quite popular with companies that wanted to do
development using virtual machines so that the developers didn't stomp
all over each other.  Around 1992, IBM created an AIX guest for the
ES/9000 which became very popular on VM.  OS/390 was a formally
supported implementation of a much more efficient VM with both UNIX
and MVS modules available.  In the late 1990s, some MVS guys got the
great idea to emulate the System/370 architecture on Linux, and that
project became known as Hercules.  Another bunch of VM programmers
looked at the possibility of creating a Linux guest to run under the
new z/VM alongside z/OS on the zSeries.

Just about the time that everyone was assuming the mainframe was on
its last legs, Linux for zSeries created a whole new market.  It was
very popular among server farm companies, which could provide lots of
LAMP servers on a single machine.

But late in 2005, VMware released VMware Player.  VMware had a popular
virtual machine for workstations, but it was expensive, as much as
$150 a copy.  You could download it for an evaluation period, but
their enforcement was pretty good.  The problem was that not everyone
wanted, or needed, all of the abilities to create and reconfigure
virtual machines.  VMware Player made it possible to use VMware
Workstation to create a VMware image, using backup images, VirtualPC
images, or just creating a virtual machine and installing the OS, and
then to save that image in a format which could be used by Player.
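A Player "appliance" is really just a directory holding a virtual
disk plus a small .vmx descriptor.  As a rough sketch (the disk name
"debian.vmdk" and the exact key set are only illustrative; real
appliances carry many more settings), generating one looks something
like this:

    #!/usr/bin/env python
    # Rough sketch: write a minimal .vmx descriptor that VMware Player
    # can open, assuming a pre-built virtual disk named debian.vmdk.
    # The key names shown are real .vmx keys, but the exact set a
    # given Workstation version emits will differ.
    vmx_settings = {
        "config.version": "8",
        "virtualHW.version": "4",
        "displayName": "Debian test-drive appliance",
        "guestOS": "otherlinux",
        "memsize": "256",
        "ide0:0.present": "TRUE",
        "ide0:0.fileName": "debian.vmdk",
        "ethernet0.present": "TRUE",
    }

    with open("debian.vmx", "w") as vmx:
        for key, value in sorted(vmx_settings.items()):
            vmx.write('%s = "%s"\n' % (key, value))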

Due to copyright issues, it wasn't practical to pass around lots of
copies of Windows images, but it became really easy to pass around
preconfigured Linux images.  Soon, major commercial software vendors
were offering images of their products, all properly configured for
evaluation, proof of concept, or pilot projects, as ready-to-run
"appliances" which could be started just like any other application.
Linux distributors began offering appliances containing their
distribution and the most popular applications, ready to run for a
test drive.

The nice thing about the VM appliances was that you didn't have to
worry about your hardware; whether you had a Pentium, an AMD64, or an
Intel Core Duo, you could run the VMware appliances.  As a result,
many users began to experiment with VMware Player on Linux.  Using
Linux as the primary operating system made it easier and faster to
start the appliances, which also ran faster on a Linux host due to
faster context switching and more efficient memory management, making
it easier for the VMs to get the memory they needed quickly.

The other nice thing about VMs was that they were easy to back up.  If
you used the right backup software, you could install Windows, use
VMware Workstation to convert the backup image into a VMware image,
and keep the "virgin" copy as a primary backup.  With external USB or
FireWire drives, or SAN storage devices, you could archive several
versions to large multi-gigabyte disks and, most important, be sure
that you could recover to any one of those backups.
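As a sketch of that kind of archiving (the VM directory and mount
point below are made-up examples), something like this on the Linux
host rolls a dated tarball of a VM's directory onto an external
drive:

    #!/usr/bin/env python
    # Rough sketch: archive a whole VM directory (.vmdk and .vmx
    # files) as a dated tarball on an external USB/FireWire drive, so
    # any snapshot can be restored later just by unpacking it.
    import os
    import tarfile
    import time

    vm_dir = os.path.expanduser("~/vmware/winxp-virgin")  # example VM
    backup_dir = "/media/usbdisk/vm-backups"              # example mount
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = os.path.join(backup_dir, "winxp-virgin-%s.tar.gz" % stamp)

    os.makedirs(backup_dir, exist_ok=True)
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(vm_dir, arcname=os.path.basename(vm_dir))
    print("archived %s to %s" % (vm_dir, archive))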

This was also nice when you ran into library (DLL) conflicts on
Windows.  You could start with your "virgin" copy and create a new
image for running the software that wouldn't run before.  For example,
when I needed the ability to run WBI 6.0, RAD 6.0, and Rational
Software Architect, there were DLL conflicts, and uninstalling the WBI
software made the drive unreliable.  By falling back to one of the
backup images, I could have an image for RAD, an image for RSA, and an
image for WBI.  Of course, since IBM also has Linux versions, I
eventually just created Linux versions of each image.

Xen is nice too, but you still have to have Windows installation CDs
to be able to create a Xen image.  The problem is that most XP systems
were sold with a "recovery partition", which doesn't work well for
installing Xen images.  In fact, it doesn't work too well when the
hard drive gets corrupted, or fails and must be replaced, either.
Most OEMs are now offering installation media with their PCs again.
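For what it's worth, the classic Xen domain config files are just
Python syntax, so a Windows HVM guest installed from a real CD looks
roughly like the sketch below (this assumes Xen 3.x with hardware
virtualization support; the file names and paths are only examples
and differ per distribution):

    # /etc/xen/winxp -- rough sketch of an HVM guest config, Xen 3.x
    # style.  Paths, the disk image, and the installation-CD ISO are
    # examples only.
    kernel = "/usr/lib/xen/boot/hvmloader"
    builder = "hvm"
    name = "winxp"
    memory = 512
    vcpus = 1
    disk = [ "file:/var/lib/xen/images/winxp.img,hda,w",
             "file:/var/lib/xen/images/winxp-install.iso,hdc:cdrom,r" ]
    vif = [ "type=ioemu, bridge=xenbr0" ]
    device_model = "/usr/lib/xen/bin/qemu-dm"
    boot = "d"   # boot the CD image for the install, "c" afterwards
    sdl = 0
    vnc = 1      # watch the installer through a VNC viewer

You would then start the install with something like "xm create
winxp" and point a VNC viewer at the host.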

Another interesting phenomenon was the evolution of "desktop servers".
These are desktop machines, usually connected to a PC over Ethernet or
high-speed WiFi, which provide a Linux "desktop" interface to Windows
users.  VNC has been around for a long time, but the recent trend of
simply replacing virus-corrupted PCs with new ones created an
abundance of corrupted Windows machines which could be reused as Linux
machines.  Even cheap laptops became good Linux machines.  You could
buy a good high-resolution machine with a Windows license, and use
that machine to remotely access the older and cheaper machine with the
lower-resolution display.

The irony is that many of the newer high-resolution machines are
turning out to be purchased as Linux machines.  The Z60p, for example,
has a WUXGA display and a FireGL graphics card, which is optimal for
displaying Linux, not Windows.  Combined with the dual-core processor,
64-bit registers and address space, and high-speed memory bus, it's
very easy to see why you would want to run 64-bit Linux, with 64-bit
Linux applications, and use an Xgl display.  For those few Windows
applications you still need, you buy the machine with the OEM Windows
license (XP Pro) and installation media, and reinstall Windows as a
Xen client.  It's a very elegant way to get all of the advantages of
both a high-grade commercial Linux and a good version of Windows on
the same machine.

Don't have Windows XP for the machine?  Dig out those NT 4.0 disks, or
Windows 9x disks.  The virtual machines do a great job of fooling
these operating systems into thinking that they are running on old,
reliable Windows hardware.  Remember, those licenses WERE transferable
from the OEM machine to other machines, as long as you formatted the
hard drive on the OEM machine so as to permanently remove Windows
(install Linux or FreeDOS).

I can count the applications that require using a Windows VM on one
hand, and those are very occasional uses: a travel site, the Mercury
TestDirector web interface, Quicken, the occasional Word document that
won't open with OpenOffice (I can use Works), and Microsoft Project.
The irony is that there are Linux equivalents for all of these
applications, written to the UNIX, Java, or OSS API set.  We just have
to get managers to be willing to give up their favorite excuses for
failure. :-)

