
Re: Open Source Re-use Makes the Difference

Roy Schestowitz wrote:
> Open source can be recycled
>
> ,----[ Quote ]
> | But with open source, you have the capacity (if you can do it yourself
> | or find someone else who will do it) to lift what you like about an old
> | program and stitch it together with a new one. Code isn't lost. It's
> | reused.
> `----

Actually, it gets even better than that.  Because these open source
applications and libraries are available in a well-documented and
modular form, it's possible to reuse the known, working, hardened
components without having to modify or rewrite them.

One of my favorite examples is WinZip on Windows versus the Linux
implementations of similar functionality.

In Windows, all of an application's components must be compiled together
into a single executable.  You can use DLLs as shared libraries, which
gives a limited degree of modularity, but there is no simple "shell"
interface to these components, so everything still has to be built into
one program.

This made a great deal of sense when Windows did not have preemptive
multitasking, or when Windows NT 4.x could only do about 100 context
switches per second.  But modern Linux systems running on 1 GHz
Pentiums can do about 1 million context switches per second with very
little effort.  Even Windows 2000, Windows Server 2003, and Windows XP
can do several thousand context switches per second.  This is why
Cygwin-style applications are now practical on Windows 2000 and XP.

Linux and Unix have always had very fast context switches, and some
hardware, such as the PowerPC, HP PA-RISC, MIPS, and CCI Power chips,
even had special hardware accelerators to make Unix context switching
even faster.  Even on Intel, a *nix context switch on BSD or Linux
only takes about 1,000 instructions.

As a result, Linux and Unix applications are often written in terms of
a number of simple programs which can be invoked through a simple
"shell" interface.  In some cases, these shells can even be graphical.
Python, Tcl, and Ruby all come with graphical toolkits which allow
customized applications to be written with very little effort (relative
to "compiled" solutions for Windows based on C++, VB, or C#).
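
As a rough sketch of that style (the file names and pattern here are
purely hypothetical), a one-line pipeline can stitch several small,
hardened tools into a new "application" without compiling anything:

   # list every C file that mentions TODO, count its lines, smallest first
   grep -l 'TODO' *.c | xargs wc -l | sort -n

Each stage is an existing, already-debugged program; the "new" work is
just the glue.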

Sadly, Microsoft has never been willing to throw in even the most basic
compiler, which means that most of the real power of Windows XP/2K has
remained untapped.

Back to my example of WinZip.

WinZip actually combines the functionality of four Unix components.  The
first, "ar", has been around for years and was originally used to create
libraries of compiled binary object files.  The second, "compress", has
been available in a number of flavors and has evolved over the years;
the compression used in Zip is the Lempel-Ziv algorithm.  The third
element is the "shell", which identifies the files to be compressed,
compresses those files, and stores them in the archive file.  Finally,
the fourth element is the graphical interface, which lets the user view
the available files and directories, select the items to be added to
the archive, and then pass that list of paths to the "shell".

In Windows, all of this had to be compiled together.  As a result, if a
more efficient compression algorithm such as bzip2 came along, you had
to wait for a future release to include it.  Depending on economic and
political motivations, this could be a very long wait.  If a better
indexing structure became available for the "ar" function, you might
have to wait several releases to see it.  And if you didn't like the
GUI, let's just say you shouldn't hold your breath waiting for the
newer and better release.

In Linux, the same concept was available.  Linux also supported zip,
bzip2, gzip, and compress.  Furthermore, Linux supported content-specific
compression formats such as JPEG, MPEG, and PNG.  As a result, one could
create archives in which each file was optimally compressed based on its
content, and have the archive available in such a way that each file
could be accessed and/or indexed.  The shell could now be a simple
shell script: a trivial for loop over the list, with a case statement
for each file.
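
A minimal sketch of such a loop-plus-case script (the file types,
compressors, and directory names are only illustrative):

   #!/bin/bash
   # compress each file with a method suited to its content, then
   # gather the results into one archive (assumes plain file names)
   mkdir -p staging
   for f in "$@"; do
       case "$f" in
           *.jpg|*.png|*.mpg) cp "$f" staging/ ;;                # already compressed
           *.txt|*.c|*.html)  gzip -c "$f" > "staging/$f.gz" ;;  # text: gzip
           *)                 bzip2 -c "$f" > "staging/$f.bz2" ;;
       esac
   done
   tar cf archive.tar staging/

If a better compressor appears, you edit one line of the case statement;
you don't wait for the next release of a monolithic GUI archiver.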

Finally, the GUI interface, which is often the most subjective part of
the program, could be easily tuned to the needs of different users.
Those who just wanted quick and dirty scripts could use ksh or bash;
those who wanted a cute little interactive fat-client GUI could use
Tcl/Tk, Python, or Perl/Tk.

But then again, other "file browsers", such as Konqueror, could also
have this functionality built in.  And finally, the "file system" could
be configured to allow zip files to be opened like directories.
Suddenly these optimized archives are as manageable as directories.
Suddenly all applications can open files that are stored within ar
files.
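
One concrete way to get that effect today (a sketch, assuming the
FUSE-based archivemount tool is installed; the paths are hypothetical):

   mkdir -p ~/mnt/archive
   archivemount project.tar.gz ~/mnt/archive    # archive now looks like a directory
   ls ~/mnt/archive                             # ordinary tools see ordinary files
   fusermount -u ~/mnt/archive                  # detach when finished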

And if we enhance the indexing of the ar files, we can have even better
ways to view the files.  Instead of cryptic short names, we can have
more descriptive names.  We could even index them according to
keywords.


>                         http://blogs.zdnet.com/open-source/?m=200608
>
> No proprietary product can compete with the GPL, whose derivatives (unlike
> codebase) grow exponentially for linearly-increasing effort.

Was it Sir Isaac Newton who said,
   "If I have seen further, it is by standing on the shoulders of
Giants"?

Linux and open source provide that same opportunity.  I can write new
code standing on the shoulders of such greats as Bill Joy, Marc
Andreessen, Richard M. Stallman, Brian W. Kernighan, Larry Wall, and
James Gosling.

I will probably never be as famous as they are, but I can provide some
good value for a lot of people and make their lives a little easier.

Rex Ballard
http://www.open4success.org

