Archive for the ‘Programming’ Category

Name Collision

Car crash

When names collide

BACK in 2002, I chose to work on a project which dealt with the game Othello, also known as Reversi. To give the development some rigour, I chose a name for it, without paying much thought to its future potential. A careful, exhaustive investigation of name collisions simply did not seem worthwhile at the time.

Having searched the Web only shallowly, I knew of no name collisions when I chose the title “Othello Master”. I even explained the choice of the name in my report and proposal, which reflects how I viewed that choice at the time:

This project has been set to produce an application which will be titled Othello Master due to some visual similarity to an older game called Chess Master. It will require knowledge of game theory and advanced computer graphics.

The name was therefore conceived innocently. It was less than a year ago that I became aware of a name collision, for which I am to blame. Search engines revealed a game from the mid-eighties, which suddenly resurfaced in results from archival pages. It ran on the Amiga, and perhaps on other platforms too.

I sometimes wonder if I should get hold of this game and play it. Mine is Open Source and GPL’d, so no-one is prevented from playing it for free. In fact, the downloads page is always there for those interested. As for the number of downloads, I believe it approaches 1,000, but I rarely keep track of the numbers. It runs on all platforms and there is even a Windows executable.

Other items on Othello Master:

Mathematical Equations in HTML

Equation in German

Here is an arbitrary set of useful links:

Nice examples, like the following equations, have been put together:

Δu = ∑_{i=1}^{n} ∂²u/∂x_i²

∫_{−∞}^{+∞} exp(−x²) dx = √π

The above is valid HTML, but as expected, it makes heavy use of tables although data is not truly tabular. From a standards perspective, this may be frowned upon.
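For comparison, the same two equations can be typeset without any table markup at all; a sketch of the LaTeX source:

```latex
\[
  \Delta u = \sum_{i=1}^{n} \frac{\partial^2 u}{\partial x_i^2}
  \qquad
  \int_{-\infty}^{+\infty} \exp(-x^2)\,dx = \sqrt{\pi}
\]
```

Here the structure of the sum and the integral is expressed semantically rather than simulated with rows and cells.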

While on the issue of rarely-used symbols, below lie my related ‘pocket links’:

Separating Content and Layout

MODULARITY and compartmentalisation, at least in the context of programming, are important and fundamental concepts. They allow for easy extension or modification of any piece of software. In the context of Web design, the elements to be dealt with separately are layout and content, as the former virtually ‘wraps’ the latter.

Design for the Web and typesetting must be treated just like programming, with almost no exceptions. Why rule out WYSIWYG paradigms and interfaces? Because abstraction is lossy. Since even bloated applications can never faithfully read one’s mind, they produce mistaken outcomes resting on assumptions that should not be made. The issues are also driven by ambiguity. Below lie a few examples:

Cascading Style Sheets (CSS): what is the motivation? The idea of maintaining one single file to rule them all: a single point of manipulation, which facilitates consistent change of layout across a variety of pages. Separating content from presentation is the core rationale. This allows for the flexibility that is inherent in themes. Styles adhere to a given standardised mechanism and syntax, much like APIs in the programmatic domain.

If you develop a software product, you can have someone else extend your code, add themes, or utilise your services, e.g. the Google Maps API, which is now trailed by Yahoo equivalents. Also worth noting is the number of Firefox and Thunderbird themes and extensions (plug-ins). WordPress provides a generous number of plug-in ‘hooks’ as well, which encourages outside contribution. All these extensions can be attributed to good support for outside involvement, and to modularity, which separates the trunk of an application from its add-ons.

Hand writing on paper

LaTeX: TeX styles are a similar scenario, where consistent styling is easily achieved. A change can be applied to an entire document in which semantic structure has been embedded by the author. This, in fact, is why many books and eclectic conference proceedings are assembled using LaTeX.
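To illustrate that separation, here is a minimal LaTeX sketch; the document class, package, and heading style below are arbitrary choices of mine, not taken from any particular book or proceedings:

```latex
% Semantic structure lives in the body; styling decisions are confined
% to the preamble, so one change restyles the whole document.
\documentclass{article}
\usepackage{titlesec}

% Restyle every section heading in one place:
\titleformat{\section}{\large\sffamily\bfseries}{\thesection.}{1em}{}

\begin{document}
\section{Introduction}
The author only marks this as a section; how sections look is
decided once, above, for the entire document.
\end{document}
```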

Fortunately, more and more people begin to comprehend the value of this approach and the mantra behind it, which is not only motivated by inter-operability. It is also guided by the emerging importance of the structural composition of documents. In the absence of separate styles, many files need to be changed consistently, which can become a labour-intensive task. Such operations are possible, e.g. using recursive search and replace, yet they should be considered nothing but a fallback option.
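That recursive search-and-replace fallback can be sketched in a few lines of Python; the file suffix and the strings involved are hypothetical, and a real run would of course deserve a backup first:

```python
import os

def recursive_replace(root, old, new, suffix=".html"):
    """Walk a directory tree and replace `old` with `new` in files
    whose names end with `suffix`. Returns the number of files changed."""
    changed = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(suffix):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                text = f.read()
            if old in text:
                with open(path, "w", encoding="utf-8") as f:
                    f.write(text.replace(old, new))
                changed += 1
    return changed
```

The point of the example is precisely its weakness: every file is rewritten by brute force, whereas with a separate style sheet the change would live in one place.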

Google Print and AI

Iuron

I have already introduced my latest Open Source project (namely Iuron, the Semantic Knowledge Engine) a few times in the past. This afternoon I will be meeting the man who is arguably the father of the Semantic Web to discuss this project. However, I begin to suspect that Google have picked up on similar ideas by now. From the article, which can be found in the Edge (among other places):

“We are not scanning all those books to be read by people,” explained one of my hosts after my talk (at Google). “We are scanning them to be read by an AI.”

There is a certain hint in that Web site as to the long-term intentions and goals at Google. Overall, I don’t know if Google have beaten me to it, but I suspect it was not something too cryptic or far-fetched. Anyone in the field of machine learning must have thought about it at one stage or another. I hope that my speculation is mistaken and that Google will stick with naive indexing of unreliable Web content.

Speaking of threat from the giants, Google Reader, being an AJAX-rich Web application for reading feeds, has had direct impact on Feedlounge, which is an application I help test. Ever since the launch of Google Reader, Feedlounge development has been in a rather idle and fragile state. I hope Iuron will tread strongly despite the known perils. It is a non-profit, Open Source initiative after all.

UPDATE: I have been told that knowledge representation might be a greater barrier than I had imagined. My preconceptions regarding its maturity were slightly optimistic.

Selecting or Manipulating Ad Content

THERE are a variety of techniques for summarising page content. Excerpts may be considered one of them; metadata in the (X)HTML header might be another. There is also a sharp rise in the use of tags, which can easily convey the ‘theme’ of a page and can cohesively reflect trends across sites (confer tag clouds, or see the image below).

I am discovering more and more services that are beginning to rely on a succinct collection of keywords, much like tags in Technorati, del.icio.us, or the new meta-search service gada.be. A concise representation simply gets bound to each page. Prepare to see more of this tagging phenomenon in the future. In its absence, pages become less desirable, as they are more bandwidth-consuming.
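Binding such a representation to a page can be sketched crudely. The naive frequency-based extraction below is an assumption of mine, not how any of the services named above actually work:

```python
import re
from collections import Counter

# A tiny, illustrative stopword list; a real service would use a larger one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "for"}

def extract_tags(text, count=5):
    """Pick the most frequent non-trivial words as crude tags for a page."""
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _n in freq.most_common(count)]
```

Manual tagging by the author remains preferable, of course, since only the author knows what the page is really about.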

Tags cloud

Contextual tags cloud in July 2005

Finally, and perhaps more interestingly, advertisements in a page can be made more relevant by using tags, having manually embedded them in the page. This prevents advertisements from appearing where they would become a contextual misfit. Thus far, however, I have only come across support for tag-guided ads in WordPress. As tags are often generated automatically, e.g. derived from the page using scripts/tools, I can envision the same ideas being extended and exposed to the entire World Wide Web. Google AdSense makes an attempt at finding out for itself what a page is primarily about, either off-line or ‘on the fly’. Why not involve the user and use his/her knowledge for assistance? That is where tagging, as in the example above, bears tremendous potential.
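The matching itself can be sketched as simple set intersection; the advertisement inventory below is entirely hypothetical:

```python
def best_matching_ad(page_tags, ads):
    """Return the ad whose own tags overlap most with the page's tags,
    or None when nothing overlaps (so a contextual misfit is avoided)."""
    page = set(page_tags)
    best, best_overlap = None, 0
    for ad, ad_tags in ads.items():
        overlap = len(page & set(ad_tags))
        if overlap > best_overlap:
            best, best_overlap = ad, overlap
    return best

# Hypothetical inventory, keyed by advertiser:
ads = {
    "board-game shop": ["othello", "reversi", "chess"],
    "web hosting": ["hosting", "php", "bandwidth"],
}
```

Returning None rather than a poor match is the important design choice here: no advertisement at all is better than one that is a contextual misfit.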

Desktop Environment Freedom

Desktop with previews

PDFs, text files, HTML files and
directories in the KDE Desktop with previews
(click to enlarge)

CERTAIN issues arise when users’ habits and their orientation in the desktop are interfered with. Desktop environments, installers, filesystem structures, or even platforms in general, are often more workable, and thus more successful, if they comply with the expectations of new users. What if these are made too stringent by the developers, however? What if decisions and conventions are settled without the involvement of the end-user?

Many desktop environments, free ones in particular, are not made uniform. The resulting diversity and flexibility leads to difficulties in using somebody else’s settings, i.e. working in conjunction under the same session. Some GNU/Linux distributions include more or less every desktop environment in existence, while settings are kept apart and well-separated for each user of the system and each desktop environment. So, where does the problem lie? Give users more freedom, I suggest, and make that the norm. It is, after all, their own computer, and they understand their needs better than the developers of the operating system.

What appears worst of all is the scenario where decisions are arrogantly made by the vendor and then enforced. This is often the path that operating systems such as Mac OS (in its varieties) and Windows take. Give the user some more choice, I say, by adding options for endless customisation with very few boundaries, if any (Open Source). Give all users the freedom they deserve and allow them to express individuality and adapt/tailor their desktop environment to suit their needs. Menu entries and other widgets will remain unaffected, so documentation, for instance, will not suffer as a consequence.

People work differently, on a variety of applications and for a wide variety of purposes. The domains in which they work differ as well. A person who works with many open windows at any given time might prefer to have “focus follows mouse cursor”. Contrariwise, to some, this “focus follows mouse cursor” behaviour runs counter to their habits. To list yet another example, a Web server needs a light desktop environment that is less susceptible to breakage and consumes small amounts of RAM.

These considerations are all highly defensible. If Linux were to be deployed in more public clusters, for example, choice could (and should) be given as to which desktop environment to use. Different strokes for different folks, but all can be catered for, provided that disk space is made available. The pertinent settings would reside in the home directory of each user. Thumbs up to GNOME, KDE, and the rest of the self-motivated window manager teams around the world.

Related item: KDE Versus GNOME

Undoing Selection/Deselection

Thinking dentist

A LONG time ago I mentioned top software design/usability bugs. It occurred to me yesterday that there is yet another common deficiency: the inability to undo the selection and deselection of files, entries, and the like. This should become a very fundamental piece of functionality, in my opinion.

How many times in the past have you used the CTRL or SHIFT keys to establish and highlight a collection of files? One wrong click and the entire selection is gone, which leads to a habit whereby files are handled in subsets, i.e. in smaller batches. In all operating systems I have come across, a selection of files forms a newline-separated list of the files with their full paths. All of this is stored in the clipboard, so implementing an undo stack should be trivial and incur no efficiency penalty.
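A minimal sketch of such an undo stack, assuming the selection really is kept as a newline-separated list of paths (the paths below are made up):

```python
class SelectionHistory:
    """Keep snapshots of a file selection so that one wrong click
    need not destroy a painstakingly assembled selection."""

    def __init__(self):
        self._stack = []

    def record(self, selection):
        """Snapshot the current selection (newline-separated paths)."""
        self._stack.append(selection)

    def undo(self):
        """Pop and return the most recent snapshot, or "" if none remain."""
        return self._stack.pop() if self._stack else ""
```

A file manager would call record() whenever the selection changes and undo() on, say, CTRL+Z; since each snapshot is just a short string, the memory cost is negligible.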

Original styles created by Ian Main (all acknowledgements) • PHP scripts and styles later modified by Roy Schestowitz • Help yourself to a GPL'd copy
|— Proudly powered by W o r d P r e s s — based on a heavily-hacked version 1.2.1 (Mingus) installation —|