____/ Jezsta Web Productions on Wednesday 04 July 2007 10:33 : \____
> "Roy Schestowitz" <newsgroups@xxxxxxxxxxxxxxx> wrote in message
> news:6938170.bW3VtJ75sL@xxxxxxxxxxxxxxxxxx
>> ____/ canadafred on Tuesday 03 July 2007 15:52 : \____
>>
>>> On Jul 2, 11:23 pm, Roy Schestowitz <newsgro...@xxxxxxxxxxxxxxx>
>>> wrote:
>>>
>>>> > As far as grammar is concerned, the search engines are evolving to
>>>> > employ the natural use of the language. Grammar has a role and will
>>>> > play an increasingly more vital role in determining content
>>>> > credibility and authenticity.
>>>>
>>>> I doubt it. The computer power needed to do this is enormous and it's a
>>>> hard
>>>> problem (if not impossible) to solve. Multiply this by the number of
>>>> pages
>>>> on the WWW.
>>>
>>> SEs use stemming, pluralization, synonyms etc. in factoring content
>>> relevance.
>>
>> True, but I don't think they descend to the level of grammar and
>> semantics.
>
> Why not, and why would you say the computer power has to be enormous? If MS
> Word can find grammar and spelling mistakes I don't feel it is too hard for a
> SE to come up with something. :-)
I'm sure that if spammers want to use some content that grabs traffic, they
could put together something that parses just fine. Also, what about tabular
data or Shakespearean prose? Would the results overall be improved? Maybe they
have already experimented with the idea and given up. There are free grammar
checkers they could easily plug in to count mistakes and then add to the
ranking equation along with some weighting.
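The "count mistakes and fold them into the equation with some weighting" idea
could be sketched roughly like this (a toy Python illustration only; the tiny
word list, the penalty weight, and the function names are all my own
assumptions, not anything a real search engine is known to use):

```python
# Toy sketch: penalise a page's relevance score by a weighted count of
# "mistakes". A real checker would do grammar, not just unknown words;
# this stand-in dictionary is purely illustrative.

KNOWN_WORDS = {
    "the", "search", "engine", "ranks", "pages", "by",
    "content", "relevance", "and", "grammar",
}

def mistake_count(text: str) -> int:
    """Count words not found in the (toy) dictionary."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return sum(1 for w in words if w and w not in KNOWN_WORDS)

def adjusted_score(base_relevance: float, text: str,
                   weight: float = 0.05) -> float:
    """Subtract a weighted mistake count from the base relevance,
    clamped at zero."""
    return max(0.0, base_relevance - weight * mistake_count(text))

clean = "the search engine ranks pages by content relevance and grammar"
sloppy = "teh serch enjine ranks pagez by contint relevence and grammer"
print(adjusted_score(1.0, clean))   # no penalty for the clean text
print(adjusted_score(1.0, sloppy))  # lower score for the sloppy text
```

The weighting is the interesting knob: set it too high and Shakespeare or
tabular data gets buried, too low and it changes nothing.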
--
~~ Best of wishes
Roy S. Schestowitz | Those who can, Open-Source
http://Schestowitz.com | Free as in Free Beer | PGP-Key: 0x74572E8E
Cpu(s): 25.9%us, 4.5%sy, 0.9%ni, 64.3%id, 3.9%wa, 0.3%hi, 0.2%si, 0.0%st
http://iuron.com - semantic engine to gather information