Archive for October, 2005

KPilot Versus JPilot/GPilot

KPilot screenshot: version 4.3 on SuSE 8.2 (not the version reviewed below)

LINUX Palm synchronisation remains scarcely-explored territory to many. I currently own a Tungsten and I have three machines, all of which run Linux. In practice, I refuse to use my Palm for much beyond basic PIM on a daily, and thus realistic, basis. Even the MP3 playback capabilities have lost their appeal, so what I require is a rather rudimentary, yet user-friendly and reliable, Palm-PC synchronisation suite.

Introduction and Disclaimer

I am fortunate to have had an opportunity to put both KPilot and JPilot/GPilot (gnome-pilot on GNOME) to the test. They all appear to trace back to a similar codebase, with pilot-link at the very core. My experience with the JPilot front-end is limited, so in this short review I will focus on the default KDE and GNOME conduits, particularly the ones that come ‘out of the box’.

Admittedly, I only used GPilot for several weeks before looking into what KPilot (version 4.5: “Baby”) had to offer, having glanced at it briefly last year. I have been putting Palm synchronisation to the test on equally modern distributions, namely the GNOME-based Ubuntu Warty Warthog and the (largely) KDE-oriented SuSE 9.3. Experimentation was prolonged and covered some crash testing (hard resets with data loss) as well as in-depth GUI exploration. This came after 3 years of using Palm Desktop on a Windows 98 laptop and a Windows 2000 desktop, I might add. All in all, there was a good basis for inter-platform comparison, so the critique should not be a narrow-minded one.

Conclusions and Opinions

My first impression is that KPilot is more comprehensive than the others, much like the entire desktop environment it resides in. I personally favour KDE, although I use GNOME almost every day, so my opinion is not likely to carry that ‘desktop environment bias’.

KPilot integrates rather well with its underlying/surrounding environment (e.g. the mail client and calendaring software), whereas the GNOME equivalent is rather stripped-down and raw. The latter does what I require it to do and it even attempts to synchronise with Evolution (an Outlook-type suite), which has been said to be slightly buggy. Nonetheless, it fails quite miserably at matching the ease-of-use of KDE’s equivalent. Firstly, setup in GNOME requires some hand-tweaking, at least in my own experience, which can end up consuming plenty of time. I also found the wizards in GNOME to be less helpful, more verbose and more confusing.

Returning to KPilot, the front-end interface is more natural to its user. During synchronisation, all files are ordered in a sensible file structure under ~/.kde/share/apps/kpilot (or equivalent). gpilot, on the other hand, adheres to a flat hierarchy that provides little context, if any. Moreover, KPilot is largely intuitive, highly customisable, and comes with cleaner and apparently more comprehensive documentation.

Lastly, to those with strong sentiments of disagreement, I must clarify that everything was tested under Ubuntu and SuSE, so some of the above observations could conceivably be attributed to the distributions rather than to the Palm conduits and packages themselves. Nonetheless, I would choose to go with KPilot, as I presently do, although I used gpilot at the start. It seems as though KPilot is simply more mature.

Your Palm and That Linux Migration

If you dread the day when you must synchronise your Palm with a new Linux box or a fresh installation, have a look at the very self-contained HOWTO.

Cited by: PalmAddict

Terabit Connection

Optic fibres: extremely valuable stuff, and not just for toys

BEHOLD the power of fibre-optics. Terabit connection, anyone?

Although we cannot speak of terabit connections just yet, transmission throughput over fibre is said to have reached that point. I once thought that our 2 gigabit backbone was something admirable, but think what a 1,000 gigabit link would enable us to do. Cross-site processors? Distributed computing environments that are also global?

A Japanese company has developed technology to transmit a two-hour movie in 0.5 seconds, the world’s fastest speed achieved with fibre-optic cables in the field, it says.

Wow! On my network connection at the University, a film might take a minute or two to transfer; that sure is fast enough for me. Does humanity even have the thirst for such bandwidth yet?
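As a rough sanity check on the figure quoted above, consider what that claim implies. The film size below is my own assumption (a DVD-sized film of roughly 4.7 GB), not a figure from the article:

    # Back-of-the-envelope estimate of the implied line rate (Python).
    film_gigabytes = 4.7       # assumed DVD-sized film; not stated in the article
    transfer_seconds = 0.5     # the figure quoted in the article

    implied_gigabits_per_second = film_gigabytes * 8 / transfer_seconds
    print(f"Implied throughput: ~{implied_gigabits_per_second:.0f} Gbit/s")  # ~75 Gbit/s

Even under that conservative assumption the implied rate is tens of gigabits per second, which dwarfs a 2 gigabit backbone; a larger or less-compressed film would push the figure higher still.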

Cross-Platform Remote Access

Multiple SSH sessions: dozens of remote sessions occupying a cluster, with terminals shaded on the left monitor

WHATEVER operating system we use, the idea of using remote terminals should not be foreign to us. These days, it is rather common to log on to a computer remotely and manage it from afar as if we were actually there.

Inter-Platform Connections

Windows users frequently stick to VNC, which requires a high-bandwidth connection and grabs the entire desktop/workspace, metaphorically speaking, ‘across the wire’. Nonetheless, there are some smart algorithms (cf. Citrix clients) which only re-draw elements once they change, so speed and bandwidth might not be the utmost concerns.

Linux users, on the other hand, are capable of establishing transparent connections with Windows machines via VNC or RDP, for which there are many applications in existence (remote desktop clients such as rdesktop, among several other variants). Linux also takes a different approach in its most natural method for remote access. Take, for example, SSH connections with X forwarding, wherein individual application windows get ‘grabbed’ and communicated over the network only upon demand. Everything else can be managed from the command-line interface (CLI), e.g. bash inside an xterm. This might be less natural to the majority of users nowadays, especially to those unfamiliar with CLIs.
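To illustrate the approach, here is a minimal sketch of pulling a single remote window over SSH with X forwarding; the host name and the application (xclock) are hypothetical placeholders, and key-based authentication is assumed:

    import subprocess

    # -X asks the remote sshd to forward X11 traffic, so only the windows of
    # the launched application travel over the wire, not an entire desktop.
    subprocess.run(["ssh", "-X", "user@remote.example.org", "xclock"])

The equivalent can of course be typed directly at a shell prompt; the point is simply that only the requested window crosses the network.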

Windows connectivity to *nix protocols can be established using the renowned PuTTY. In the case of Telnet or rlogin, applicability might be slightly different, but nearly all protocols seem to have been covered. In fact, Windows typically supports telnet at its core (Start » Run... » telnet » ENTER). This establishes a somewhat mutual relationship: Windows users can remotely log in to Linux machines (a commercial X server, or Cygwin, may be needed for graphical sessions) and Linux users can connect to Windows hosts. In modern Linux distributions, all the necessary toolsets come pre-installed, from what I can gather.

Extreme Use of Remote Access

When carrying out some computer vision experiments, I was at times using over 30 Pentium 4's. These served as quite considerable overnight resource hogs. The communication barrier was practically non-existent; I am fortunate enough to work on a 100 Mbit Ethernet backbone. To give some indication of how fast the connection actually is, I can transfer an entire CD (~650 MB) across campus in less than a minute. That unbelievable speed is at times truly needed. I estimate that I use up over 100 GB of bandwidth per month, mainly due to backup necessities.
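The arithmetic backs this up; a quick check, assuming the full 100 Mbit/s is available to a single transfer (optimistic on a shared segment) and ignoring protocol overhead:

    # Time to push a full CD over a 100 Mbit/s link.
    cd_megabytes = 650
    link_megabits_per_second = 100

    seconds = cd_megabytes * 8 / link_megabits_per_second
    print(f"~{seconds:.0f} seconds")   # ~52 seconds, i.e. just under a minute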

As regards extreme use of remote login, this is one of the most exciting experiences, to me at least. Rather than conducting large-scale experiments over the period of a month on a single CPU, they can be distributed and thus completed within a day. Nothing can beat that in terms of productivity. AI is known to be resource-greedy, and our computer vision methodology falls under that branch too. I will soon write about the use of supercomputers to run my experiments, although this is still under negotiation.

Traffic Chain

I could no longer resist my geek spirit, so I decided to experiment with the idea of SSH chaining. The dependency of one machine upon another is something that intrigues me, so I often log in remotely to one machine, which in turn connects to another.

I decided to set up a larger SSH chain wherein I connect back to my own computer via an entire ‘ring’ of machines, using SSH. I wanted to see how this affects speed and responsiveness in applications that ‘travel’ along the chain. Needless to say, this also cripples all computers in that ring, an observation with some interesting implications of its own. These intermediary machines can be perceived as purposeless routers. If one computer in the chain is reset, the whole chain collapses and the connection is lost. It may also take a while to rebuild, which, to me at least, is amusing.
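A minimal sketch of building such a chain follows, assuming key-based logins on every hop; the host names are hypothetical placeholders. Each hop simply runs ssh again towards the next machine in the ring:

    import subprocess

    # Hypothetical hops in the ring; the final command runs on the last machine.
    hops = ["alpha.example.org", "beta.example.org", "gamma.example.org"]
    final_command = "hostname"

    # Build: ssh -t hop1 ssh -t hop2 ssh -t hop3 hostname
    # -t forces a pseudo-terminal so each intermediate ssh behaves interactively.
    chain = []
    for hop in hops:
        chain += ["ssh", "-t", hop]
    chain.append(final_command)

    subprocess.run(chain)

If any hop in the list is reset, the nested sessions collapse exactly as described above.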

Cluster Control

On to some more extreme uses of SSH: some time ago I read about the use of SSH to control entire computer clusters in parallel. In essence, the user sends any given command to an ‘army’ of computers (clients or computational hosts). The tool is not very flexible, but it can be valuable under particular circumstances.
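For illustration only, here is a minimal sketch of that idea in Python, fanning a single command out to a list of hosts over SSH; the host names are placeholders and this is not the specific tool referred to above:

    import subprocess

    hosts = ["node01", "node02", "node03"]   # hypothetical cluster members
    command = "uptime"

    # Start one ssh process per host so the command runs everywhere in parallel.
    procs = {host: subprocess.Popen(["ssh", host, command],
                                    stdout=subprocess.PIPE, text=True)
             for host in hosts}

    # Collect and print each host's output once it finishes.
    for host, proc in procs.items():
        output, _ = proc.communicate()
        print(f"{host}: {output.strip()}")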

Google-IBM Desktop Search

Google Desktop

AFTER teaming up with NASA1 and forming a pact with Sun Microsystems, Google now sidle nearer to IBM. This move goes to show that Google are recognised as a market leader that is here to stay and prosper further.

SAN FRANCISCO (Reuters) – IBM and Google Inc. are collaborating to make it easier for office workers not only to search for local documents and personal e-mail but to delve deep into corporate databases, the companies said on Friday.

1 Side note: Brin’s mother works (or worked) at NASA

Related item: Google Desktop Released

Microsoft and Windows Zombies

Bill Gates arrested in his younger days (photo in the public domain)

MICROSOFT are finally starting to investigate zombie attacks, which have been launched continuously owing to vulnerabilities in their flagship product, the ubiquitous operating system Windows. This is long overdue and a real embarrassment, as the extent of zombie attacks is growing rapidly. Not only do Windows users suffer as a result of these attacks, but the entire Web does, me included. From the article:

Microsoft is investigating 13 spam operations as part of a call to action against zombies, which are compromised computers used without their owners’ permission to send spam, launch denial-of-service (DoS) attacks, and spread worms and viruses.

Ironically enough, Microsoft attempted to fix this loophole, but did so incompletely; their patch was flawed. So, for the past 3 weeks, my site has come under attack from over 1,000 Windows zombies per day (never fewer than 1,000). At some point, the strain was so high that the site and its Web host reached the brink of collapse. This could only be described as a denial-of-service (DoS) attack, one that almost left my site dead on arrival (DoA). It has been costing me money, as well as plenty of time and distress.

I explicitly promised to mention this aspect of my activities no more, yet the two articles above sparked my emotions once again (and Halloween is a couple of days ahead, so zombies are on topic). Why should my pocket be emptied to cover for junk traffic that is spewed out by unprotected operating systems which were claimed to be strong and reliable? Why is it that Webmasters are no longer in control of their own Web sites? Those affected will know exactly what I mean.

A lot of my bandwidth gets drained by zombies from all around the world. Consequently, the server slows down and the log files become extremely noisy. Spammer-driven zombies are intended to ruin these logs. The effort is affordable, as almost any hijacked computer can be exploited to mount brute-force attacks. The only solution is operating system cleansing. While Microsoft attempt to hunt down the spammers/attackers, I think they ought to be responsible for patching up any vulnerable machine. This includes any Internet café where the owners are simply apathetic to critical patches.
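For what it is worth, spotting the worst offenders in such noisy logs is straightforward. Below is a minimal sketch that tallies requests per client address in an Apache-style combined log; the log path is a placeholder and this is merely one way of shortlisting addresses worth blocking:

    from collections import Counter

    hits = Counter()
    with open("access.log") as log:           # placeholder path to the real log
        for line in log:
            ip = line.split(" ", 1)[0]        # combined log lines start with the client IP
            hits[ip] += 1

    # Print the ten noisiest addresses, i.e. the likeliest zombie candidates.
    for ip, count in hits.most_common(10):
        print(f"{count:8d}  {ip}")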

My final message to Microsoft: you got the Internet into this mess to begin with. You made your operating system so easy to penetrate/hijack that the entire world (or a subset of it) suffers. You are now responsible for putting an end to it all, and you surely have enough money to put into the necessary operations. I am tempted to use strong language, but I suppose it would be inappropriate at this time and place and would achieve nothing.

I once addressed Windows users with a friendly message on the need to patch up the operating system. If the users are unwilling to do so, or find it rather difficult, Microsoft ought to step up and get involved. Otherwise, the Net would soon degrade into a sordid mess involving more and more DoS attacks. Fingers would all be pointing at Redmond, and the reputation of Windows, if it ever existed, would be shattered.

Moderate the Moderators

CCTV: can the men/women behind the cameras be trusted?

REMEMBER the popular phrase, “Who is watching the watchers?”

Well, it appears as though, under the umbrella of Web 2.0, where visitors’ involvement is perpetually encouraged, we face yet another challenge: how can comments be moderated, articles ranked and statistics assembled reliably?

Can you truly trust a moderator? Need you ever moderate the moderator? If so, are you not getting into a cyclic moderation trap? Slashdot have introduced the idea of meta-moderation, where moderators can be penalised for unfair treatment of comments. For example, most of us are aware of the effect a terrible day has on temperament; even in a peer review process, people are more likely to punish others due to their own personal problems.

Why has this idea sprung to mind and awoken in my consciousness? WordPress has recently seen some collaborative comment spam filtering, currently known as Akismet. I was involved in testing the plug-in (see my entry on ending comment spam) and I can finally give more details about it.

Comments are marked as “genuine” or “spam” and their status is shipped to a central repository where filtering is administered. At present, API keys are given only to trusted bloggers. This is a handicapping property of the service, which may exclude many and may even be interpreted as insulting by some. The API keys are intended to keep spammers from gaining any ability to flag comments maliciously, thereby collapsing the system and breaking everybody’s long-taught filters. Essentially, there is the possibility of filling the engine with noise, which would make it utterly unusable.
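For a sense of what this looks like in practice, here is a minimal sketch of submitting a comment to the central comment-check service, in the shape Akismet’s REST interface is commonly described; the key, blog address and parameter set below are illustrative assumptions rather than an official client:

    import urllib.parse
    import urllib.request

    API_KEY = "your-api-key"          # placeholder; keys are handed to trusted bloggers
    BLOG_URL = "http://example.org/"  # placeholder blog address

    def check_comment(user_ip, author, content):
        """Ask the central service whether a comment looks like spam."""
        data = urllib.parse.urlencode({
            "blog": BLOG_URL,
            "user_ip": user_ip,
            "comment_author": author,
            "comment_content": content,
        }).encode()
        url = f"http://{API_KEY}.rest.akismet.com/1.1/comment-check"
        with urllib.request.urlopen(url, data) as response:
            return response.read().decode() == "true"   # "true" means spam

The collaboration comes from the central repository: verdicts are served from it, and bloggers’ corrections (marking comments as spam or genuine) flow back into it, as described above.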

Finally, the scenario above begs the question: how can the moderators, the set of users whose flags define spam, ever be moderated if there is no trust or moderation atop them? We have yet to discover where it will all end up. I believe there may be an exclusive ‘army’ of flaggers, while the remainder will be just clients of the Akismet filtering engine.

New Zealand Goes Open Source

Roy loves SuSE

SuSE Linux, my favourite distribution of them all, is set to take over in New Zealand.

A deal has been struck to allow agencies throughout government access to open-source software and support from Novell at preferential rates.

The deal will centre on Novell’s SUSE Linux server and its Linux Desktop, but agencies will also be able to access other open-source products such as MySQL and Jboss with full support, says Novell NZ general manager Matthew Christie.

This comes at a time of some financial trouble and expected lay-offs, which nonetheless should not affect SuSE’s roadmap.

Novell layoff rumors swirled this week, but analysts were quick to dismiss any doomsday scenarios involving the SuSE Linux operating system.
