Friday, January 6th, 2006, 1:42 pm
The Cost of Efficiency
Making algorithms efficient and using more compact data representations has a cost in terms of complexity. Both lead to greater design and programming time, yet there is plenty to be gained.
A broad-scale example: a good search algorithm is hard to write and harder to understand, let alone algorithms that discriminate among data. There is a reason why the Web became both widely available and practical throughout its decade-long expansion.
One could say the Web succeeded because it has been open and because its constituent objects were broken down rationally. It also helped that pages and graphics are not raw bitmaps; instead, images on the Web either exploit mathematically-grounded compression such as wavelets, or define a palette of colours and then use indices to refer to it.
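To see why a palette is the more compact representation, consider a rough back-of-the-envelope sketch (the image dimensions and colour count below are hypothetical, chosen only for illustration): a small image that happens to use few distinct colours can be stored as a short colour table plus one small index per pixel, rather than three full bytes per pixel.

```python
# Hypothetical example: storage for a 100x100 image that uses only
# 16 distinct colours, as a raw bitmap versus palette + indices.

width, height = 100, 100
colours = 16

# Raw bitmap: 3 bytes (R, G, B) for every pixel.
raw_bytes = width * height * 3

# Palette: 16 RGB entries (3 bytes each), plus one 4-bit index per
# pixel -- two pixels packed into each byte.
palette_bytes = colours * 3 + (width * height) // 2

print(raw_bytes)      # 30000
print(palette_bytes)  # 5048
```

The palette version is nearly six times smaller here, which is the kind of saving that made graphics practical over the slow links of the early Web.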
Moreover, rather than poster-like Web sites where everything is hard-coded, there emerged a language, (X)HTML, which is concise, descriptive, and unambiguous. Specifications define how such a description — a code, that is — should be rendered, which makes the rendering engines laborious to implement. Construction of pages, however, becomes arguably easier, and the end product is far more compact in size. Versatility (or flexibility) is yet another matter.
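The size difference between a description and a hard-coded rendering is easy to quantify with a small sketch (the headline, the 400x60 pixel rendering, and the 3-bytes-per-pixel figure below are all hypothetical, picked just to make the comparison concrete):

```python
# A headline as descriptive (X)HTML markup versus a hard-coded
# "poster" bitmap of the same headline (hypothetical 400x60
# uncompressed rendering at 3 bytes per pixel).
markup = b"<h1>The Cost of Efficiency</h1>"
markup_bytes = len(markup)   # 31 bytes of description
bitmap_bytes = 400 * 60 * 3  # 72000 bytes of pixels

print(markup_bytes)  # 31
print(bitmap_bytes)  # 72000
```

The description is thousands of times smaller, and — unlike the bitmap — the browser is free to restyle or reflow it; the cost has simply moved into the complexity of the rendering engine.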
The take-home message: complexity in implementation, or even in specification, takes its toll, but it brings real benefits. ‘Lazy’ programming leads to inefficiency, whereas tactful complexity pays off in the long run.