__/ [John Bokma] on Friday 28 October 2005 14:33 \__
> Roy Schestowitz <newsgroups@xxxxxxxxxxxxxxx> wrote:
>> first things to crop in my mind. You see, search engine algorithms are
>> not mission-critical, so they are likely to be poorly tested.
> Are you serious?
Actually, yes. Let's think about it...
Search engine engineers write code which will analyse millions of sites. They
may also embed some junk code in the trunk, for whatever reason.
When the refined algorithm is finally ready for 'prime time' (e.g. Bourbon),
would it make much difference if debugging information was included in the
compilation and resulted in a 1% slowdown? Would it have just a slight
effect on performance, or would it unleash the thunder of death upon the
servers?
In search engines, there are no right or wrong answers. There are many
pointers, and their ordering (relevance) is a 'fluffy' art. It doesn't make
much difference if one domain among 80 million gets 8,000 links. It's
peanuts. It's affordable. There are bigger issues to address, but nonetheless
such mistakes give the SE a bad name and are embarrassing. They should be
high enough up the agenda.
 This makes you wonder if there are 'test set' Web sites that SEs use to
test their spiders on. Under such circumstances, there would be unfair or
unbalanced treatment of the World Wide Web.
Roy S. Schestowitz | Open the Gate$ to Hell
http://Schestowitz.com | SuSE Linux | PGP-Key: 74572E8E
2:30pm up 63 days 23:45, 5 users, load average: 0.11, 0.06, 0.01
http://iuron.com - next generation of search paradigms