__/ [wd] on Sunday 23 October 2005 06:57 \__
> On Sun, 23 Oct 2005 06:24:50 +0100, Roy Schestowitz wrote:
>
>> __/ [wd] on Sunday 23 October 2005 05:55 \__
>>
>>> I'm seeing old sites that have been around for many years that are on
>>> the same exact IP address and they are obviously just different domain
>>> names with nearly the same content except optimized for location.
>>
>>
>> I usually think of Palm and other large companies when that observation
>> gets discussed. True, they are optimised for location, but there is plenty
>> of repetition too.
>>
>> I pointed out Palm because I often end up landing on a page that does
>> not serve me what I have sought. Instead, there can be a single site
>> whose pages are delivered depending on the country where the visitor
>> resides.
>>
>> These regionalised site 'mirrors' are much like the porting/forking of
>> an application rather than an extension of the trunk. Imagine Firefox
>> being sub-divided into a "Cool Surfer Edition", "Asian Edition", "Censored
>> Edition" and so forth... need we speak of Windows Vista, which will come
>> in 7 editions?! Windows inheriting that terribly messy Linux 'model' of
>> distributions?
>
> 7 editions of VISTA? I can picture it already...
>
> * demo (free Enterprise edition for 30 days)
> * quickstart edition ($69, includes paint and notepad version 12)
> * Student edition ($119, includes Wordpad 12, and $20 off MS anti-spyware)
> * Home edition ($169, includes minesweeper 12 and two new card games)
> * Professional edition ($249, allows networking of up to 3 computers)
> * Developers edition ($599, includes Monad)
> * Enterprise edition ($1199, for small business)
Read the following:
* http://www.pcmag.com/article2/0,1895,1858101,00.asp
He is being quite funny about it too.
>>> I'm not even sure if Google can automatically detect
>>> hidden text very well because I see it everywhere. It is a bit
>>> frustrating when you are trying to do things legitimately...
>>
>> Nobody can reliably detect mirrors. I once spoke to a professor about
>> our 'almighty' plagiarism detection system and he admitted it was more of
>> a scare factor. You can /suspect/ a mirror, but rarely have any certainty.
>> If you cannot tell which is the original source, as is the case on the
>> WWW, you cannot safely penalise either copy.
>
> Are you saying that in the case of duplicate sites Google can't tell which
> one is the original?
Not confidently. Nobody could. Some people quote other sites without
crediting them with a link. There are many rip-offs I have come across, and
they are simply undetectable. A good item can quickly have its SERPs
snatched. For example, Google "french women don't get fat" and see what I'm
referring to.
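Just to illustrate the "suspect but can't be certain" point: here is a toy
Python sketch (my own illustration, not anything Google actually uses) that
scores near-duplicate text with word shingles and Jaccard similarity. A high
score flags a likely mirror or rip-off, but the measure is symmetric, so it
cannot tell which copy is the original.

```python
# Toy near-duplicate detector: word shingles + Jaccard similarity.
# Hypothetical illustration only -- not Google's actual algorithm.

def shingles(text, k=3):
    """Return the set of k-word shingles (overlapping word tuples)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets, from 0.0 to 1.0."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

original = "french women don't get fat because portions are small"
ripoff   = "french women don't get fat since portions are small"

score = jaccard(shingles(original), shingles(ripoff))
print(round(score, 2))  # a single changed word still leaves a notable overlap
```

Note that swapping the two inputs gives exactly the same score; the
arithmetic says how alike the texts are, never which one came first.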
> I don't think Google can detect a lot of things. I've seen some really
> strange SEO out there. You would think Google would penalize a site for
> having a single period or underscore as the link text 15 times on a page...
More worrying is the fact that, as a Webmaster, there are many secondary
factors to be aware of. Try telling a Webmaster that he/she must never use
underscores, that he/she must percolate link 'energy' wisely, and so on.
It's an unfair game. And guess what? It's getting worse. Spam is everywhere:
site traffic, referrers, E-mail, content, links. You name it; spam is there
to stay, if not to expand.
What's worse is that immature, child-like game where people say "they did it
first". This relates to the subject line, as a matter of fact.
* "These competitors of mine buy links, so why can't I?"
* "E-mail spam is everywhere, so why not add our contribution?"
* ...
Roy
--
Roy S. Schestowitz | WARNING: /dev/null running out of space
http://Schestowitz.com | SuSE Linux | PGP-Key: 74572E8E
7:50am up 58 days 17:59, 9 users, load average: 0.33, 0.65, 0.65
http://iuron.com - next generation of search paradigms