On Aug 10, 12:50 pm, Linonut <lino...@xxxxxxxxxxxxx> wrote:
> * Homer peremptorily fired off this memo:
>
> > This is where the argument that "proprietary = professional = better"
> > completely falls flat, since there is absolutely nothing "professional"
> > about the utter garbage that I've just spent the last few hours battling
> > with.
I remember in 2000, shortly after everybody started breathing again
because computer systems hadn't launched nuclear missiles from outer
space due to the Y2K bugs, there was suddenly a glut of Windows and
COBOL programmers who wanted jobs as Web developers.
Many of them were able to learn Java, or even Unix and C and C++, but
they also brought their bad habits and Windows-limited thinking with
them.  They could not grasp the concept that Unix didn't need to
resort to massive multi-threading to get performance; a fork() was
often sufficient.  The result was some hideous Java- and Unix-based
server projects that were programmed just like Windows projects, and
had exactly the same problems Windows had.
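To make that concrete, here is a minimal sketch of the classic Unix
fork-per-connection pattern (the port number and greeting are made up
for illustration, and most error handling is trimmed):

  /* Sketch: fork-per-connection server.  No thread pools, no locks;
     the kernel's scheduler and copy-on-write fork() do the work. */
  #include <string.h>
  #include <unistd.h>
  #include <signal.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>

  int main(void)
  {
      int lfd = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr;
      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(8080);        /* example port */
      bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
      listen(lfd, 16);
      signal(SIGCHLD, SIG_IGN);           /* kernel reaps the children */

      for (;;) {
          int cfd = accept(lfd, NULL, NULL);
          if (cfd < 0)
              continue;
          if (fork() == 0) {              /* child: handle one client */
              close(lfd);
              write(cfd, "hello\n", 6);
              close(cfd);
              _exit(0);
          }
          close(cfd);                     /* parent: back to accept() */
      }
  }

Each client gets its own process; a crash in one child can't take the
server down, which is exactly the isolation the Windows-trained
developers kept trying to rebuild with threads.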
I have seen similar problems when Windows desktop applications are
converted to Java and/or OSS APIs.  The developers try to put the
grapes and yeast into the wine-skins and then wonder why they don't
get the wonderful performance of Linux/Unix.  Even OS X had some
first-release "transition pains" as developers learned to adapt to
the capabilities of Linux/Unix and Java.
There are fundamental differences that go all the way back to the
roots of both systems in the 1980s.
UNIX, especially BSD 4.2 and later, was a multiuser system.  The
operating system had to mediate all shared resources, and the kernel
was specifically designed to minimize the contention between resources
and processes, including foreground and background user shells.
Shared resources were accessed by sending requests to servers, often
via UDP or via message queues. Message queues had been used by IBM
since the late 1960s, also to mitigate the contention between
resources. There were also Unix domain sockets, which provided memory-
to-memory connections to servers for access to shared resources
ranging from simple file "databases" to complex servers like DNS and
X11.
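For illustration, a Unix domain socket client takes only a few lines
of C; the socket path here is hypothetical, standing in for any local
server such as a name or print service:

  /* Sketch: memory-to-memory request/reply over a Unix domain
     socket.  The path and "QUERY" protocol are made up. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  int main(void)
  {
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);
      struct sockaddr_un sa;
      memset(&sa, 0, sizeof(sa));
      sa.sun_family = AF_UNIX;
      strncpy(sa.sun_path, "/tmp/example.sock", sizeof(sa.sun_path) - 1);

      if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
          perror("connect");
          return 1;
      }
      write(fd, "QUERY\n", 6);                 /* send a request */
      char buf[256];
      ssize_t n = read(fd, buf, sizeof(buf));  /* read the reply */
      if (n > 0)
          fwrite(buf, 1, (size_t)n, stdout);
      close(fd);
      return 0;
  }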
As a result, there was very little need to worry about these lower-
level details, and developers could focus on functionality and
presentation.  For most of the UNIX community, the goal was to serve
as many users as possible, as efficiently as possible.  Even when
users ran X11 servers and client applications, everything was
designed to be efficient and clean, with the GUI generating a series
of "transactions" or "streams" that could be parsed by servers, and
streams sent back to the GUI to provide near-real-time response and
accuracy.  This is why UNIX was used for time-sensitive information
such as stock tickers, investor news feeds, and real-time monitoring
systems for everything from telephone switching systems and
electrical grids to nuclear reactors and transportation systems,
including railroads, air traffic, and even automobile traffic.  This
gave Unix-equipped users the ability to respond rapidly to sudden
changes with near-instinctive accuracy and speed, avoiding potential
crises with sometimes split-second reactions.
Furthermore, with servers such as X11, it was possible to create
interesting, complex applications by simply combining a set of simple
applications into X11 "Frames".
In the Microsoft world, the operating system did as little as
possible; the developers managed everything within their process and
were limited to a very small number of "processes", including the
"Terminate and Stay Resident" (TSR) programs.  The application
managed its own memory, and some applications, such as Lotus 1-2-3
and Borland SideKick, even had their own multitasking.  Some
applications even used overlays and swapping.
Over the years, Microsoft would incorporate more and more of these
developer-designed functions into MS-DOS and later Windows.  At the
same time, Microsoft focused on the features that would give them the
biggest "bang for the buck" in the single-user desktop environment.
As a result, things like preemptive multitasking, efficient memory
management, high-speed context switching, and security were very low
on the priority list.  Meanwhile, things like really pretty icons,
wizards, and carefully orchestrated GUI interfaces, such as those
found in Word, Excel, PowerPoint, and Project, were carefully
designed to be just slightly better than those of WordPerfect, Lotus
1-2-3, and Corel Draw; just enough better to seduce users into
dropping the established competitors and switching to the Microsoft
products, after which very little further effort was applied.
As a result, most Microsoft applications had to have all
functionality combined into a single process.  EVERYTHING had to be
compiled or linked into that single process, including help,
initialization, exception handling, and of course, the interaction
between threads within that process.
within that process. Of course, the applications could make use of
shared libraries, called Dynamic Link Libraries (DLL), but even those
had to be carefully managed because many of the classes and methods
were not fully reentrant, so the application had to manage the
interactions with these classes to prevent race conditions and
deadlocks.
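The same reentrancy problem exists in plain C.  Here is a small
sketch using localtime(), a classic non-reentrant libc call, as a
stand-in for the non-reentrant DLL methods described above:

  /* Sketch: localtime() returns a pointer to shared static storage,
     so two threads calling it can race; POSIX localtime_r() writes
     into a caller-supplied buffer instead.  This mirrors the
     bookkeeping a Windows app had to do around non-reentrant DLLs. */
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      time_t now = time(NULL);

      struct tm *shared = localtime(&now);   /* NOT thread-safe */

      struct tm mine;
      localtime_r(&now, &mine);              /* reentrant variant */

      printf("%d vs %d\n", shared->tm_year + 1900, mine.tm_year + 1900);
      return 0;
  }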
Eventually, Windows and UNIX experienced a fusion into what we now
call the Internet, the World Wide Web, and Web 2.0, with Windows
clients being interfaced to UNIX servers using a client/server or
client/middleware/server architecture in which forms were transformed
into requests to the server.  The server then sent back a response
which could then be displayed by the GUI interface.  Some of these
interfaces include Web browsers, e-mail readers, instant messaging,
and even charting and streaming video.
Of course, to do this, many of the interfaces had to be "dumbed down"
for Windows.  For example, the Web couldn't handle full-blown SGML at
the time, so the UNIX community came up with a minimalist subset we
now know as HTML.  Linux and Unix could handle far more complex
interactions and documents using REAL SGML, but Windows would just
crash if it tried.  Eventually, a compromise between SGML and HTML
was adopted, called XML, which allowed the use of very simple SAX
parsers or the more complex DOM parsers, but even the SAX parsers
were "dumbed down" for Windows.
Instant messaging actually started on UNIX as "talk", which allowed
two UNIX users on the same server to "talk" to each other, even
though they might be in different offices, different floors,
different buildings, or even different cities.  Later, Internet Relay
Chat
provided the ability to connect to public servers and interact with
lots of people who would join "team rooms". You could enter a team
room, ask some questions, and someone would "whisper" and invite you
to a private chat, where you could get your questions answered. It
became a popular hangout for UNIX and Linux administrators. Later, an
LDAP directory was used to help identify who was on, and provide each
user with the ability to make their own "buddy list".  The early Linux
implementations kept the two functions separate, with the ability to
"whisper" to someone who might not even be in a chat room at that
moment.
The Windows version had to be dumbed down because the client had to
support both the LDAP directory and the IRC functionality in a single
process. And again, all of the help, exception handling, and other
"house-keeping" functions had to be compiled into the applications.
> > I'm referring to Adobe Reader for Linux version 8.1.2, which is provided
> > as proprietary freeware in an RPM package.
Adobe is a vendor that has mostly written Windows and Mac clients for
almost 20 years.  Acrobat was Windows-based for many years, and
didn't even come in a Unix or Linux flavor.  Unix users had
ghostscript and gzip or bzip2, which could provide compressed
PostScript that could easily be displayed to Linux/Unix GUI users.
There are even lighter-weight PDF rendering tools, such as xpdf,
which make use of simpler processes and have been "good enough" for
quite some time.
Many Windows application vendors go through some "adjustment pain"
when making the transition to Linux/Unix. Many things they had to
manually code into the application process are already available as
fully functional processes. Other things can often be done more
efficiently, and many of the help and exception-handling functions can
be moved to other processes entirely, and invoked only when they are
really needed.
> > I recently decided to compare features and PDF rendering on three PDF
> > viewers, Adobe Reader; Evince and KPDF. As it turns out, all three are
> > pretty comparable, other than the fact that Adobe Reader is somewhat
> > slow; bloated; and (as I later discovered) has a badly broken uninstall
> > RPM script.
>
> acroread is /very/ slow and bloated, though it has been quite awhile
> since I've installed it because of its bloat.
For the reasons I just explained above, this is understandable.
Eventually, Adobe will find the people with the skill set to optimize
applications for Linux/Unix (including Mac), and they will stop trying
to "Fight" the operating system. Remember, when applications start
trying to write their own memory "heap" managers, they end up fighting
the memory management system of the kernel. Suddenly memory has to be
mapped, remapped, and each remapping requires context switches and
other conflicts. Allocating object pools, and letting the kernel map
the memory from it's pool of large or small buffers will allow the
kernel to optimize the memory use and mapping, including the passing
of buffers between processes in the form of interprocess
communications.
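A rough sketch of what "letting the kernel do it" means, assuming
glibc on Linux (the threshold value below is purely illustrative):

  /* Sketch: rather than hand-rolling a heap, let the allocator and
     kernel cooperate.  glibc's malloc serves small requests from its
     arena and hands large ones to mmap(); mallopt() tunes the
     crossover point (glibc-specific). */
  #include <stdio.h>
  #include <stdlib.h>
  #include <malloc.h>     /* mallopt, M_MMAP_THRESHOLD (glibc) */

  int main(void)
  {
      /* ask glibc to use mmap() for anything over 64 KiB */
      mallopt(M_MMAP_THRESHOLD, 64 * 1024);

      char *small = malloc(256);            /* from the malloc arena */
      char *large = malloc(1024 * 1024);    /* kernel-mapped via mmap */

      if (small && large)
          puts("allocations handled by the allocator, not a custom heap");

      free(large);                          /* munmap'd back to the kernel */
      free(small);
      return 0;
  }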
> > It's when it comes to uninstalling Adobe Reader that the "fun" really
> > starts though. The first clue that something went wrong was when the
> > "rpm -e" command took an inordinate amount of time to complete,
> > certainly longer than I expected. As I later discovered, this was due to
> > the "%post" scripts that formed part of the package, which are executed
> > "post" uninstall, ostensibly to restore the system configuration to its
> > previous state. What these scripts /actually/ did was completely wipe
> > out the system MIME settings for PDF, requiring me to manually restore
> > them using RPM and KDE Control Centre:
In a Windows context, that makes sense.  If you have registry entries
pointing to non-existent DLLs and classes, some unpleasant things can
happen, from sudden termination of the application to a Blue Screen
of Death.  I just recently tried to install a fresh copy of Windows
on a
new drive, and used a Backup/Restore utility to back up settings,
applications, and files from the old drive to a USB drive, then
restore them to the new drive. It seems that the Backup didn't copy
anything from "Program Files" and when I tried to run the "Recovered"
drive, I suddenly found a previously reliable version of Windows
giving me blue screens within as little as 15 minutes.
It seems that the only way I can reliably move this configuration from
a 5400 RPM drive to a 7200 RPM drive is to have VMware Converter
create an image, and then run Windows XP as a VMware guest.  It did
give me a significant increase in speed, so it was worth the effort.
> >http://forum.fedoraforum.org/printthread.php?t=181598
>
> > I then discovered that Firefox was unable to handle PDF files correctly
> > too, since the Adobe RPM had failed to clean up garbage it left behind
> > in the plugins directory (nppdf.so). Even then, I still had to force the
> > plugin cache to refresh, by deleting ~/.mozilla/firefox/pluginreg.dat
> > before I could finally download PDF files again using Firefox.
Again, this makes sense.  Firefox doesn't depend as heavily on a
"registry", which means that if Adobe searches the registry
equivalents (xrdb, .Xdefaults, and app-defaults in the case of
Linux/Unix), it won't find the Firefox configuration contexts.
> > What a bloody mess, and all thanks to Adobe's non-standard and broken
> > RPM of their proprietary Slopware.
I wouldn't be too hard on Adobe. Many people like having Acrobat to
read PDF files, and their relative success with Acrobat gave them
enough courage to do the Flash port too.
It will take some time for Adobe to "find its way around" on Linux,
and to learn to do things the "Linux Way".  It sounds like Firefox
might want to look at syncing its configuration files with the X11
configuration files as well.  Simply putting the properties
in .Xdefaults would provide the user-level configuration control
desired.
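For example, user-level resources in ~/.Xdefaults look something like
this (these Firefox resource names are hypothetical, purely for
illustration):

  ! hypothetical Firefox resources in ~/.Xdefaults, loaded with
  ! "xrdb -merge ~/.Xdefaults"; the resource names are made up
  firefox*pdfHandler:   /usr/bin/evince
  firefox*geometry:     1024x768+0+0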
> It's amazing how many commercial vendors just don't take the time to get
> it right.
It even took IBM a few years to get it right. Look at some of their
Lotus Notes solution attempts. It's a good study in what NOT to do.
Notes 8 is a good example of how to do it right.