
Re: Link management

  • Subject: Re: Link management
  • From: "Paul H" <nospam@xxxxxxxxxx>
  • Date: Thu, 12 Jan 2006 17:04:02 GMT
  • Newsgroups: alt.internet.search-engines
  • Organization: ntl Cablemodem News Service
  • References: <%vwvf.12123$W4.8915@newsfe4-gui.ntli.net> <dpm6me$ebr$1@godfrey.mcc.ac.uk> <k2rwf.64851$Cj5.26100@newsfe6-win.ntli.net> <dpu0q9$fp6$1@godfrey.mcc.ac.uk>
  • Xref: news.mcc.ac.uk alt.internet.search-engines:74702
"Roy Schestowitz" <newsgroups@xxxxxxxxxxxxxxx> wrote in message 
news:dpu0q9$fp6$1@xxxxxxxxxxxxxxxxxxxx
> __/ [Paul H] on Monday 09 January 2006 10:38 \__
>
>>
>> "Roy Schestowitz" <newsgroups@xxxxxxxxxxxxxxx> wrote in message
>> news:dpm6me$ebr$1@xxxxxxxxxxxxxxxxxxxx
>>> __/ [Paul H] on Friday 06 January 2006 16:02 \__
>>>
>>>> I have over 100 external links on my website, and I need to check which
>>>> of these link back to me. I need to do this regularly.
>>>>
>>>> Is there a software tool that does this?
>>>>
>>>> Thanks,
>>>>
>>>> Paul
>>>
>>>
>>> * Technorati.com enables you to do this almost in real time. Output is in
>>> the form of Web pages or RSS feeds.
>>
>>
>> Couldn't get my head round that. On the About page it says: "Technorati is
>> the authority on what's going on in the world of weblogs." ??
>
>
> If the world of Weblogs is all about /links/, then yes. Tagging is also
> something that they dominate; they are the best bar none in that area. Since
> a new blog is created every second nowadays, their capacity is stretched and
> they can't cope with more obscure sites that link heavily, often for SEO
> purposes.
>
>
>>> * Try 'link:<your_site_address>' in Yahoo search or Google search. You can
>>> pull results in the form of RSS feeds from both.
>>>
>>> * Use one of a variety of meta search engines in http://gada.be/ to keep
>>> track of items that identify your site. Many links are included and
>>> delivered in RSS form.
>>>
>>> * Look at your referral logs and try to see what comes up (a quick
>>> log-scanning one-liner appears a little further down). Valuable links tend
>>> to lead actual visitors (traffic) to your site.
>>>
>>> I mentioned RSS quite often because you sought a software tool. Use an RSS
>>> reader (e.g. RSSOwl, Thunderbird, or the Web-based Feedlounge and Google
>>> Reader). This means that you will have a comfortable environment for
>>> keeping track of this any time, anywhere. You can also aggregate results
>>> from a variety of distinct sources.
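
As an aside on the referral-log suggestion above, here is a rough one-liner
for skimming referrers out of an Apache "combined" access log (the log path
is only an example):

  awk -F'"' '{ print $4 }' /var/log/apache2/access.log |
      grep -v '^-$' | sort | uniq -c | sort -rn | head -20

It counts how often each referring URL sent a visitor, so reciprocal links
that actually deliver traffic float to the top.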
>>>
>>> Hope this helps,
>>>
>>> Roy
>>>
>>> --
>>> Roy S. Schestowitz      |    make install -not war
>>> http://Schestowitz.com  |    SuSE Linux     |     PGP-Key: 0x74572E8E
>>>  4:35pm  up 26 days 23:46,  12 users,  load average: 0.63, 0.43, 0.20
>>>      http://iuron.com - next generation of search paradigms
>>
>>
>>
>> After looking at some of the solutions I now know *exactly* what I need. I
>> currently have a links page with hundreds of links on it that has never
>> been maintained; links have been added but never checked for a reciprocal
>> link. I want to be able to scan every domain that is linked to from my
>> links page and check whether it links back to me.
>>
>> So, for example, if I have a link to http://www.bananas.com on my site, I
>> want to scan the entire bananas.com website for a reciprocal link. Is this
>> possible?
>>
>> Thanks Roy,
>>
>> Paul
>
>
> There are commercial tools for doing that, if I remember correctly. However,
> how deep do you need to go on this voyage for that reciprocal link? People
> tend to move links around, if not remove them altogether. You don't want to
> break 'link pacts' in vain.
>
> Will you be willing to crawl the entire site and, if so, how would your
> 'link partner' feel about this? How often will you run such link checks? In
> Linux, one could do this rather simply, without any shrink-wrapped
> bloatware. Firstly, to check that all outgoing links are 'alive', pass your
> links page to:
>
> http://validator.w3.org/checklink
>
> and look at the summary. Then, what you need to do is descend into each such
> site -- the external links, that is. One tool (among others) for the job is
> wget. You could download just the referred page or even fetch the entire
> site by following links recursively.
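
For instance, a rough sketch of that wget step (assuming GNU wget on a
Unix-like system; the domain and directory names are only illustrative):

  # Fetch just the page you link to:
  wget -q -O bananas.html http://www.bananas.com/

  # Or mirror the site politely, up to two levels deep:
  wget -q --recursive --level=2 --no-parent --wait=1 \
       --directory-prefix=mirror/bananas.com http://www.bananas.com/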
>
> You could then run a scanner like fgrep or grep on the files (similar
> front-end tools are available for Windows, albeit at a cost). You should
> then attempt to find your domain name anywhere in the site, which is now
> mirrored locally. Something like:
>
> fgrep -R "mysite.com" *
>
> The scanner would tell you where the links reside, if anywhere. You can
> apply this in batch mode, automatically going through your full list of
> links, and then schedule it as a recurring job (e.g. with UNIX cron).
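
A minimal sketch of that batch job (assuming a partners.txt file with one
domain per line; every name and path below is illustrative):

  #!/bin/sh
  # check-links.sh -- report which link partners still link back to us
  MYDOMAIN="mysite.com"

  while read domain; do
      # Mirror the partner site one level deep into mirror/<domain>
      wget -q --recursive --level=1 --no-parent \
           --directory-prefix="mirror/$domain" "http://$domain/"
      # Look for our own domain anywhere in the local copy
      if grep -Rq "$MYDOMAIN" "mirror/$domain"; then
          echo "$domain: reciprocal link found"
      else
          echo "$domain: NO link back"
      fi
  done < partners.txt

  # Example crontab entry to run it every Sunday at 03:00:
  # 0 3 * * 0  /home/paul/check-links.sh > /home/paul/link-report.txt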
>
> Hope it helps,
>
> Roy
>
> -- 
> Roy S. Schestowitz
> http://Schestowitz.com  |    SuSE Linux     |     PGP-Key: 0x74572E8E
>  3:35pm  up 29 days 22:46,  14 users,  load average: 0.79, 0.95, 0.80
>      http://iuron.com - next generation of search paradigms


Roy,

Thanks again for your time and wisdom. I have opted to go with Bloatware ;O)

I have been using the SEO Studio demo (http://www.trendmx.com/) and it
appears to manage links pretty well. However, taking this route does mean I
will trash all my existing links and start from scratch.

The alternative appears to be going through them all manually or taking a
crash course in Linux or Perl, neither of which appeals.

Regards,

Paul 


