John Bokma wrote:
> Roy Schestowitz wrote:
>> CarolW. wrote:
>>> One suggestion: Could put the other pages into a subdirectory folder
>>> then disallow the spiders from those folders.
>> This can be a big overhaul in the absence of _relative_ links.
> Just a search & replace :-D
In theory, yes. But what about Web sites with thousands of pages? You then
need an fgrep-like tool that scans a large batch of files and does the
replacement, which is not easy.
Not to mention that there is no guarantee that plain text (as opposed to
tags) will remain unchanged, and with a large site the result is difficult
to test. Anyway, that's just the con, which one has to be aware of...
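To make the "text vs. tags" risk concrete, here is a minimal Python sketch
(my own illustration, not anything posted in this thread) that rewrites link
prefixes only inside href/src attributes, using the standard library's
html.parser. A blind search & replace would also mangle any prose that
happens to mention the same path; a parser-based pass leaves text nodes
untouched. The prefixes `/` and `/pages/` are just example values.

```python
from html.parser import HTMLParser


class LinkRewriter(HTMLParser):
    """Rewrite href/src attribute values only; text nodes pass through as-is."""

    def __init__(self, old_prefix, new_prefix):
        super().__init__(convert_charrefs=False)
        self.old, self.new = old_prefix, new_prefix
        self.out = []

    def _fix(self, value):
        # Only rewrite values that start with the old prefix.
        if value is not None and value.startswith(self.old):
            return self.new + value[len(self.old):]
        return value

    def handle_starttag(self, tag, attrs):
        parts = []
        for name, value in attrs:
            if name in ("href", "src"):
                value = self._fix(value)
            parts.append(f' {name}="{value}"' if value is not None else f" {name}")
        self.out.append(f"<{tag}{''.join(parts)}>")

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        # Plain text is emitted unchanged -- no risk of rewriting prose.
        self.out.append(data)

    def handle_entityref(self, name):
        self.out.append(f"&{name};")

    def handle_charref(self, name):
        self.out.append(f"&#{name};")


def rewrite(html, old_prefix, new_prefix):
    parser = LinkRewriter(old_prefix, new_prefix)
    parser.feed(html)
    parser.close()
    return "".join(parser.out)
```

For example, rewriting `/` to `/pages/` in
`<a href="/faq.html">see /faq.html</a>` changes only the attribute, giving
`<a href="/pages/faq.html">see /faq.html</a>`, whereas a naive
search & replace would also rewrite the "/faq.html" mentioned in the link
text. It is still only a sketch: a real run over thousands of files would
need batching, backups, and a diff-based review pass.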