Thursday, February 16th, 2006, 6:14 am

Search Engines and Benchmark Subjectivity

SEARCHING the Web is no exact science; were it one, it would require exhaustive exploration of the Web, which is infeasible. The more formidable search engines amass information from millions of Web sites, each containing large amounts of content, both textual and media. That information, in turn, can be interpreted and/or indexed in a variety of different forms. Rarely is the content truly understood, which is my personal motivation for knowledge engines.
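As a concrete, if contrived, illustration of one of those forms, the Python sketch below builds a toy inverted index that maps each term to the documents containing it; the sample documents and the naive whitespace tokenisation are assumptions made up for the example, not a description of how any real engine works.

    # A toy inverted index: one of many possible representations of crawled text.
    # The documents and the naive whitespace tokenisation are illustrative
    # assumptions, not how any particular search engine actually works.
    from collections import defaultdict

    def build_index(documents):
        """Map each lower-cased term to the set of document identifiers containing it."""
        index = defaultdict(set)
        for doc_id, text in documents.items():
            for term in text.lower().split():
                index[term].add(doc_id)
        return index

    docs = {
        "a": "search engines amass textual information",
        "b": "benchmarks estimate search quality",
    }
    print(sorted(build_index(docs)["search"]))  # ['a', 'b']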

Mathematics and physics could be argued to be inexact sciences as well, at least once man-made, non-fundamental fields are built on top of them. Think of computer science, for example. Its fundamentals bear directly on the complex problem of searching the World Wide Web, yet in practice that problem is tackled with ad hoc solutions. Computational theory of the kind concerned with Turing machines is not tractable enough for a single most correct and most efficient algorithm ever to emerge and stand out.

Donald Knuth wrote his well-known series of books on correctness and efficiency in common algorithms, and it remains an elegant reference for many computer science practitioners. Even simple problems, such as sorting numbers or elements, are handled differently by different algorithms, and their efficiency depends on the architecture involved, the scale of the problem, and its nature. Search algorithms are no different, which is why they should be engineered differently depending on a number of key factors. Hence, the quality of search engines cannot be judged objectively; it can only be estimated using test cases and artificial scoring schemes.
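To illustrate what such an artificial scoring scheme might look like, here is a small Python sketch that computes precision at k over a handful of hand-labelled test queries. The queries, the relevance judgements and the cut-off are all assumptions invented for the example, which is exactly why the resulting number is an estimate rather than an objective measurement.

    # Precision@k over a tiny, hand-labelled test set. Every choice here
    # (the queries, which results count as relevant, the cut-off k) is an
    # arbitrary assumption, illustrating why such scores only estimate quality.
    def precision_at_k(returned, relevant, k):
        """Fraction of the top-k returned results judged relevant."""
        top_k = returned[:k]
        if not top_k:
            return 0.0
        return sum(1 for r in top_k if r in relevant) / len(top_k)

    # Hypothetical engine output and human relevance judgements per query.
    test_cases = {
        "turing machines": (["doc3", "doc7", "doc1"], {"doc3", "doc1"}),
        "sorting algorithms": (["doc2", "doc5", "doc9"], {"doc5"}),
    }

    scores = [precision_at_k(ret, rel, k=3) for ret, rel in test_cases.values()]
    print(sum(scores) / len(scores))  # mean precision@3 for this test set: 0.5

A different choice of test queries or relevance labels would yield a different score, so two such benchmarks can legitimately disagree about which engine is 'better'.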

Still, everyone wants to discover the perfect recipe for outperforming Google. Others try to reverse-engineer its algorithms and cheat (fame and riches owing to ‘Google juice’ that is channelled to one’s site or sites). Many of us continue to favour and recommend Google, which brings the largest number of referrals to most sites in existence. There is a danger here, though. Large search engines are the main target for deceit, and they are as easily confused by spam as they are inclined to pick up rare and valuable content.

Quality of search probably lies in the mind of the searcher and is the result of hearsay, somewhat of a ‘cattle effect’. Even engines that spit out cruft might be defended unconditionally by their innocent users. This may lead the competition to forfeit the battle and invest fewer resources (e.g. datacentres) in the attempt to catch up. Phrases like “Google it” do not help either, as they promote a search monoculture at best.

Related item: Search Engines and Biased Results
