Archive for the ‘SEO’ Category

Alexa Ranks – Only Make-Believe

Alexa ranks can be fun. But can they ever be trusted?

What does a high Alexa rating mean to a webmaster? It shouldn’t mean much, because it isn’t accurate. Alexa is a website that tracks a website’s traffic history and assigns a ranking based on the number of visitors. However, the fact that it requires a toolbar to work flaws it in many ways.

[...]

Apparently the folks at Alexa have never heard of any browser besides Internet Explorer and Firefox. This seems quite unprofessional coming from a company owned by Amazon.com.

My main site peaked at roughly 17,000th on Alexa (its Netcraft rank is currently 18,608th for schestowitz.com). In the latter case, however, the figures are grossly biased because I have the toolbar installed. Ranks very much depend on the audience a site attracts: system administrators, for example, fancy the Netcraft toolbar, whose primary service addresses a niche.

Alexa traffic ranks prove to be a real problem (as well as a perpetual pain) for Webmasters. The rank remains the only number that can conveniently be assigned to a Web site. It is a silly label that should be disregarded, but the average user does not know this. Luckily, not every average user has such ranks displayed. As a comparator, Alexa is a misleading assessor: even top sites cannot be compared unless one judges by orders of magnitude (and takes those with a grain of salt). PageRank and the likes of it weigh factors other than traffic alone, so they ought to surpass Alexa in terms of validity.

As a timely rant: I was temporarily able to influence my Alexa rank with a local installation of the A9 toolbar, but then Microsoft took over A9’s operations and forced it to shut down some competing services, the toolbar included. Yet another example of acquisitions and mergers that are practically death knells (and, in this case, a penalty to Mac/Linux/BSD users). That is why I took it personally.

Alexa rank
SearchStatus in action

Related item: Firefox Toolbars

Divisive Web

According to an article that I recently read, the Internet could one day be broken down into separate networks that are isolated and selectively dispersed around the world. This means that the global nature of the Web, as well as its wealth of information, would cease to exist. Moreover, it heralds a final goodbye to a state in which few or no censorship barriers prevail. This changes one’s perspective entirely.

This worrisome move is entirely different from the issue of Net neutrality, whose loss would in itself separate the Web into multiple tiers. It is also reminiscent of rumours about ‘Googlenet’, where one submits a site to a dark, privatised Web that gets indexed and closely monitored (obviating the need to crawl remote servers and use pings for distant notification).

In the long term, whether this is totally disastrous or not remains to be seen. Consider, for instance, the peculiar expansion of resources that are made publicly available. Let’s take a look at the way the Web has evolved in recent years. Only a tiny cross-section of the ‘visible’ Web involves content spammers (or scrapers), where visibility is loosely defined by search engines (internal sites and intranets aside). In reality, however, the spammy content that exists on the Web, deliverable though it may be, can actually form a majority, because spammers spawn colossal colonies of junk and dummy content. This leads to (or involves) blogalanches and ‘poisoning’ of the index/cache, subverting search results in the process. All this leads to chaos as search engines diverge from the correct results and deliver something less meaningful. In the struggle for good spots (or visibility) in search engines, spam rises and leads to attacks of various sorts. Temptation leads to vandalism, which leads to further maintenance. The Web no longer seems like an appealing place to be. But can division of the Web help? I very much doubt it. It is all about authorities controlling information; brainwashing is the means for making others think alike, comply, and even be submissive.

WordPress Domain Hosting

IT has been argued, and nearly announced publicly, that WordPress.com is headed towards a get-your-own-space programme. I think this would be an excellent idea. Essentially, a blog that runs on WordPress.com could then be accessed transparently from a personal domain rather than from a subdomain of WordPress.com.

Interesting thoughts spring to mind. One can get a WordPress.org blog hosted by a third party (through a manual installation or a one-click script). Alternatively, anyone could start things on a small scale with WordPress.com and then grow big(ger) with a personalised, top-level domain. While I am not sure how search engines will deal with redirections or URL changes (this could get tricky), it could be done properly by sending an HTTP header with status code 301, as sketched below. I have heard success stories as well as ‘Googlejuice’ disasters, but people’s bookmarks should not be an issue.
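
For illustration only, here is a minimal PHP sketch of such a permanent redirect. It assumes the old address can still run a script and that example.com stands in for the new personal domain; it is not WordPress.com’s actual mechanism.

<?php
// Hypothetical redirect script on the old host: send visitors (and crawlers)
// to the same path on the new personal domain with a permanent (301) status.
$new_domain = 'http://example.com';                  // assumed new top-level domain
$target     = $new_domain . $_SERVER['REQUEST_URI']; // preserve the requested path

header('HTTP/1.1 301 Moved Permanently');            // permanent, not a temporary 302
header('Location: ' . $target);
exit;
?>

A 301 (rather than a temporary 302) is what tells search engines to replace the old URL in their index, which is why it is the safer choice for preserving ‘Googlejuice’.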

Chiroweb.com, for example, has been doing essentially the same thing: your site is hosted as a subsite of a root site while remaining accessible through your own domain. Page composition (the CMS front end), on the other hand, is, as expected, restricted by the service, so there is limited freedom and scope for manoeuvre, development, and extension. This can nonetheless be circumvented by changing hosts and installing an alternative (temporary site mirror) manually. That should be possible with WordPress.org, but probably not with Chiroweb, whose templates are proprietary/licensed (example below).

Davie Chiropractic

That’s my relative in Florida!

In Fight Against Spammers, Google Drops Pages

Google Cookie

SEVERAL weeks ago I discussed some of the problems which Google are having with their cache. The links therein paint a fuller picture, albeit one that comprises much speculation: genuine pages from Web sites all across the Web are being dropped. Since then, more evidence has been surfacing, e.g.

Something really weird happened when I had the password problem last week — I completely disappeared from Google.

As discussions indicate (discussions which, oddly enough, are being deleted; Google may be trying to hide the existence and scale of the problem), something quite major is happening ‘behind the scenes’. I submitted a relevant link to Digg. As that thread indicates, Google is indexing billions of spammy pages and is apparently dropping and neglecting genuine Web sites in the process. It is not deliberate on Google’s part, but the outcome is poorer search results and reduced traffic if you are among the Webmasters affected.

Google’s Big Daddy Update and Cache Flush

Google on a computer screen

MANY Webmasters may have already noticed (or been informed) that Google’s latest and greatest, the 64-bit “Big Daddy” datacentre, has experienced a major error. The Web developers’ community is humming over the consequences whilst very few clues are selectively being delivered by Google spokesmen (and spokeswomen).

More information on the issue of page dropping, “Big Daddy” anomalies and general impact on inbound links:

The public is kindly requested not to panic. Google results pages may have degraded (including invalid, irrelevant and broken links), but all in all, this degradation is not radical. Google remains the best, bar none, in terms of search result quality.

Assessment of Competition in Search Results

Google on a computer screen

Would you like to make search engine tracking more efficient? If so, read on.

AMONG some nice Web-based tools for SEO, there exists a Google PageRank comparison tool. There are more such tools on the same site. They tend to automate queries intended for egocentric evaluation of a site’s position.

It is definitely worth a try if you are a Webmaster who seeks more attention (referrals) from search engines.

Google’s Perception of rel=’nofollow’

Iron links

Links can lose their value and get rusty, even with Google

I was innocently browsing the Internet this morning. By serendipity, I arrived at a page where Google boast their contributions to Open Source software through funding (Summer of Code). It is only one example among others. But then, upon immediate inspection, nearly all links turned out to be rel="nofollow"ed. SearchStatus made it evident by highlighting those links in red.

I have always adamantly believed that the purpose of this link attribute was different. I thought it was introduced in order to prevent and deter spam, among other things such as accommodating microformats (e.g. XFN). Here is the snag: if Google themselves use rel="nofollow" to withhold credit from relevant, on-topic links, why should anyone else hesitate to do so? rel="nofollow", a concept put in place by Google, has evidently become something that can be misused; its use has gone beyond the so-called ‘link condom’ (anti-spam) role. Otherwise, Google demonstrate hypocrisy herein.
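
To make the ‘link condom’ idea concrete, here is a rough PHP sketch (my own illustration, not WordPress’s or Google’s actual code) of how a site might stamp rel="nofollow" onto every link in untrusted text, such as a comment, so that no link value is passed on:

<?php
// Illustrative only: add rel="nofollow" to anchor tags in untrusted HTML
// (e.g. a blog comment) so that search engines do not count the links.
function add_nofollow($html) {
    // Prefix anchors that do not already carry a rel attribute.
    return preg_replace('/<a\s+(?![^>]*\brel=)/i', '<a rel="nofollow" ', $html);
}

echo add_nofollow('Visit <a href="http://example.com/">my site</a>!');
// Prints: Visit <a rel="nofollow" href="http://example.com/">my site</a>!
?>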
