On 30/08/05 20:48, you wrote:
> if this gets taken off-list, I'd be interested in discussing the topic
> further. Lord knows I need to work on my pagerank!
There is always room for improvement...for everyone, that is. WordPress
is actually quite cunning when it comes to SEO, but there are
various things I am unhappy about and that's why I still work on my
hacked WP 1.2 installation, which seems to attract good levels of
attention. I am also satisfied with Nuke and MHonArc, for example. They
all circulate PageRank rather well, but there is an issue when the
number of items gets inflated, resulting in duplicates and dilution.
I recently found out that I have small RSS objects (~400 in number)
indexed in Google. Needless to say, that is not a good thing. I do
understand that nofollow was not supported when 1.2 came out, but maybe
it's high time we adapt. Slashdot definitely did, for instance. They
only honour their own links. Why? Because they can. I don't do that
though. I refuse to nofollow people who bother to comment on my items
and follow my blog.
Then again, I think that Matt will be reluctant to use nofollow for SEO
as some people whine about its misuse. If it were up to me, nofollow
would never have been implemented, but all is history now. Since the
darn thing was introduced by Google et al., one has 'secret', code-level
factors to consider. If you get the latest version of SearchStatus
(1.9), then you can get nofollow'ed links to be
highlighted in red. It's essential in certain circumstances. Before I
got it installed, I had to "View Selection Source", which can be tedious.
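A nofollow'ed link is just an anchor carrying rel="nofollow" in its
markup, which is what the highlighting spots in the page source. A
minimal sketch of that check, using only the Python standard library
(the sample links are made up for illustration):

```python
from html.parser import HTMLParser

class NofollowFinder(HTMLParser):
    """Collect the hrefs of links that carry rel="nofollow"."""
    def __init__(self):
        super().__init__()
        self.nofollowed = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            # rel is a space-separated token list, e.g. rel="external nofollow"
            rel = attrs.get("rel", "")
            if "nofollow" in rel.split():
                self.nofollowed.append(attrs.get("href"))

finder = NofollowFinder()
finder.feed('<a href="/a" rel="nofollow">x</a> <a href="/b">y</a>')
print(finder.nofollowed)  # ['/a']
```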
> I was actually one of the first (that I knew of at the time) to make
> post titles an h1 on single pages, and integrate into meta fields as
> well. Even now there's lots of default themes that have posts as h2's
> everywhere... The real issues with SEO is that a lot of it is
> experimental, or cause-and-effect driven, or otherwise exploratory.
True. Many people experiment with it and report back to the SEO
newsgroup that I participate in, namely alt.www.search-engines. Nothing
grey-hat at all...
Speaking of H1 and H2, SearchStatus has a new feature which allows you
to check keyword density.
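Keyword density is simply occurrences of a term divided by the total
word count of the page. A rough sketch of the kind of figure such a tool
reports (the exact tokenisation SearchStatus uses is an assumption on my
part):

```python
import re

def keyword_density(text, keyword):
    """Fraction of words in `text` that match `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

text = "SEO tips: good SEO needs content, not just SEO tricks."
print(round(keyword_density(text, "seo") * 100, 1))  # 3 hits in 10 words -> 30.0
```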
> There are some things we can infer about how the engines work, and a
> little that we know from either published accounts or direct
> cause-and-effect.... but so much is still mixed up in complex algorithms
> and computations inside the core engines.
> I've also wondered what other metadata is scanned and used for not only
> pageranking, but to 'force' indexing of pages that aren't getting
> indexed (i.e., adding link statements into header pointing to particular
> articles you want indexed).
I do not know much about the impact of metadata. I know that Google does
not always care much for metadata because it opens the door to spamming.
Consider joining the newsgroup I mentioned above.
> Anyhoo, be interested to chat further -- and hear more about particular
> links/content 'absorbing energy' from ranking...
Google no longer have a 100(ish) limit on the number of links in a page.
That's why the clutter in the menu (not the blogroll) can be shifted to
form full archives (with excerpts in my case), where much of the
'energy' can be passed with good anchor text.