Archive for July, 2011

Vacation Until August

AT SOME point once a year, usually in July, I take a long break from the computer and the Internet (that's my main machine at the top). In the coming week and a half I will be travelling outdoors, meeting friends, and working inside the lab, where the Internet connection is not crucial (in fact, it mostly provides distractions).

I will be back in August and until then I will go online only to remove spam and ensure everything is up and running.

TechBytes Episode 56: GNU/Linux and Mac OS X Discussion by Michael Glasser, Roy Schestowitz, and ‘Goblin’


Direct download as Ogg (1:55:44, 25.7 MB) | High-quality MP3 (42.2 MB) | Low-quality MP3 (13.2 MB)

Summary: Michael Glasser, Tim, and Roy have a discussion about GNU/Linux and Mac OS X

TODAY’s show touches on a few bits of news but mostly focuses on comparisons between Apple’s operating systems and Linux-based operating systems. The guest in this episode is the Prescott Computer Guy, Michael Glasser, who advocates Apple products.

Except for the theme song at the end, the show includes the self-titled track by Takashi Kamide and “When You Are Near” by Warren Hood. We hope you will join us for future shows and consider subscribing to the show via the RSS feed. You can also visit our archives for past shows. If you have an Identi.ca account, consider subscribing to TechBytes in order to keep up to date.


In Defence of Publication Reform

At various stages throughout my career (I am 29 now), others and I have pondered starting a blog about Open Access, Open Data, or open research (opening up one's lab, including data, methods, and so on), but since I already spend more time than I can afford advocating GNU/Linux, I ended up putting much of my energy into fighting software patents. That is one aspect among several involving the sharing and transparency of the sciences. There is of course also a dimension associated with copyrights; despite its importance I no longer have time to pursue it, and many people already do so anyway.

To put the core argument succinctly: now that we have the Internet and a presentation layer such as the World Wide Web (for exchanging inter-connected rich media), we no longer depend on many analogue technologies, and the notion of scarcity must catch up with what is available. To expect people to publish their findings only in paginated paper form is to pretend that we are locked into legacy, which we need not be. Computer Vision, for one, is considerably limited by paper. There are much better and faster ways of getting messages across, especially in this age of information overload. Moreover, travelling to present to a crowd (a conference) is less necessary now that many people's Internet connections permit video streaming at a good resolution. Romanticising 20th century-style research is no longer beneficial, as it means inheriting its inherent limitations.

I was reminded of this in an IRC conversation last night. We agreed that the practice of publishers getting academics to write, review, and even edit entire publications, only for the publishers to profit at those academics' expense, needs to be stopped. Many academics these days have their own homepages and sometimes blogs too. They can publish a lot of material there and let quality and accuracy be judged by citation, e.g. something like the PageRank system, which merely inherits the ideas of algorithms that preceded it (take away Google's/Stanford's patent while at it). The issue, of course, is that people cannot cite papers by Internet address, at least not by conventional means. The idea that papers should be accessible through libraries is also outdated, as libraries are going somewhat extinct and working there is slower. Other than self-pride and honour from peers, what incentive is there really to be heavily involved in a publication industry which benefits publishers and no longer offers writers much exposure? Fewer people seem to search journals; they use external search engines or Wikipedia (which is multi-lingual) instead. The debate becomes ever more relevant now that Aaron Swartz is being hounded for just doing his job.
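To illustrate the sort of citation-based scoring I have in mind, here is a minimal sketch in Python of a PageRank-like ranking over a citation graph; the toy graph, paper names, and damping factor are purely hypothetical assumptions for the example, not anybody's real data.

# A minimal, hypothetical sketch of PageRank-style scoring over a citation graph.
# The toy graph, paper names, and damping factor are illustrative assumptions only.

def citation_rank(citations, damping=0.85, iterations=50):
    """'citations' maps each paper to the list of papers it cites."""
    papers = set(citations)
    for cited in citations.values():
        papers.update(cited)
    rank = {p: 1.0 / len(papers) for p in papers}

    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(papers) for p in papers}
        for paper, cited in citations.items():
            if cited:  # papers that cite nothing simply let their score decay here
                share = damping * rank[paper] / len(cited)
                for target in cited:
                    new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    # Hypothetical papers published on personal homepages rather than in journals.
    toy_graph = {
        "paper_a": ["paper_b", "paper_c"],
        "paper_b": ["paper_c"],
        "paper_c": [],
        "paper_d": ["paper_c"],
    }
    for paper, score in sorted(citation_rank(toy_graph).items(), key=lambda x: -x[1]):
        print(paper, round(score, 3))

In such a scheme a paper hosted on a homepage would rise or fall on the strength of who links to it, which is essentially what journal prestige tries to approximate by other means.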

Eventually, prices will converge somewhere around zero not because academic work is worthless but because people’s goal is to share and disseminate their ideas, not to serve someone else’s paywall. The real worth of research is derived from its reach, not its scarcity.

GNU/Linux Saves Old PCs, Not Just Money

It’s not e-waste until the hardware is dead


A LOT of PCs that were perfectly fine 5 years ago were made obsolete not by dysfunctional hardware but by resource-greedy, monopolistic, and monolithic operating systems such as Windows Vista. This was bound to be a blessing for GNU/Linux, said one member of the Manchester LUG around 2006, when Vista was being finalised. He was right. Several years down the line, machines that were acceptable back then cannot possibly run the latest Windows, leaving GNU/Linux to fill the gap with its many distributions (suiting old and new hardware alike).

Some machines, such as this one, come with Puppy Linux by default (I’ve just bought one to serve as a services monitor). There are other such machines that typically run GNU/Linux.

So yes, I have been looking into an extra unit to drive a third monitor. What strikes me as important is that the only two sites I have found so far [1, 2] both indicate that OpenOffice.org is a hot item even on Windows. With ODF and all, how would Microsoft feel? And to what extent will GNU/Linux benefit?

Network Uncertainty Problem


What happens when the network does not function, or functions very poorly? What happens when the Internet is down altogether, or keeps going on and off seemingly indefinitely? The problem need not apply to the Internet connection as a whole; it might, for example, be a DNS server, a particular Web site one needs to access, a server at work, or even an anticipated message or call. When there is no estimated time for service restoration, one tends to assume that normality will resume shortly. Only in retrospect can one decide that it would have been better to just walk away and do something else (even offline) until the problems are resolved for good.

Before going any further, it is worth stating the motivation for this post. Yes, it's BT again. All morning so far their service has been awful; sometimes the connection goes down altogether, requiring reconnection to SSH servers (which takes a lot of time). It also leads to gaps in IRC logging, makes Web surfing a real nightmare, and so on. Connection persistence is crucial here. The rest of the time there seems to be a capacity issue, so any bandwidth-heavy process is out of the question. To be fair to BT, for the past couple of months their service has been perfect, after a rather rocky first month (weekly unscheduled hangups) and many problems with setup (explained at length in this blog).
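For what it's worth, some of the reconnection pain can be reduced with a simple keep-alive/retry loop. Below is a minimal sketch in Python; the host name is a placeholder and the back-off values are illustrative assumptions (ServerAliveInterval is a standard OpenSSH option for detecting a dead link).

# A minimal sketch of a reconnect loop for a flaky line; the host name is a
# placeholder and the back-off values are illustrative assumptions.
import subprocess
import time

def keep_session_alive(host="shell.example.org", max_delay=300):
    delay = 5  # seconds between retries, doubled after each failed attempt
    while True:
        # ServerAliveInterval makes OpenSSH notice a dead connection sooner.
        result = subprocess.run(["ssh", "-o", "ServerAliveInterval=30", host])
        if result.returncode == 0:
            break  # the session ended cleanly (e.g. the user logged out)
        print("Connection lost; retrying in %d seconds" % delay)
        time.sleep(delay)
        delay = min(delay * 2, max_delay)

if __name__ == "__main__":
    keep_session_alive()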

In any event, the point worth making is that in many circumstances in life there arises a dilemma where one must decide whether to walk away from the temporary problems early on (when they first occur) or wait patiently for a while and, if so, for how long. The correct thing to do (if there is such a thing as "correct" in this context) very much depends on the knowns and unknowns; it may also depend on communication with people who are more familiar with the problems encountered (e.g. those in the back rooms). That network of people can better assess whether to plan for an offline contingency or just carry on waiting. This is why it is a sort of network uncertainty problem. There is never a solid solution to it, and the hours lost to downtime can only be minimised by becoming better informed and by preparing alternative activities to pass the time (I escaped the bad connection to write this post from the City Centre).
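As a rough, back-of-the-envelope way of framing the dilemma: if one assumes some probability that service is restored within the next hour, the expected time lost by waiting can be compared with the fixed cost of switching to an offline task. Every number in the sketch below is an illustrative assumption, not a measurement.

# A back-of-the-envelope sketch of the wait-or-walk-away trade-off; every number
# here is an illustrative assumption, not a measurement.

def expected_hours_waiting(p_restore_per_hour, max_hours):
    """Expected hours spent idle if one waits, re-checking once per hour."""
    expected = 0.0
    p_still_down = 1.0
    for _ in range(max_hours):
        expected += p_still_down          # another idle hour if still down
        p_still_down *= 1.0 - p_restore_per_hour
    return expected

switch_cost = 0.5  # assumed hours lost by switching to offline work and back

for p in (0.2, 0.5, 0.8):
    waiting = expected_hours_waiting(p, max_hours=4)
    print("p=%.1f: waiting costs ~%.1f h, walking away costs ~%.1f h"
          % (p, waiting, switch_cost))

Under these toy numbers, waiting only pays off when restoration is very likely within the hour, which is exactly the kind of known one rarely has.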

Google Targets People

Google Plus (G+) and Google Profiles look very similar, almost identical. Is Google merging them? It seems possible. It actually goes deeper than this. Google has become a major content network and not just a gateway to information; not anymore, anyway. Will Google use people's personal data to push ads? Well, it is already doing this. The transaction is hosting in exchange for shaping one's perceptions on behalf of advertisers, but the real problem is the proprietary nature of G+.

“As of 07/01/2011,” say a lot of older profile pages, “Groups profiles have been disabled. Your profile information will be available for export from your profile until 11/01/2011. This change will not affect the nicknames you might have chosen for yourself to participate in groups. It will only affect the custom profiles fields, such as your photo, location, and occupation. More details are available here.” The cited page says: “Beginning July 1, 2011, you will not be able to change your Google Groups profiles. You will be able to download the contents of your profile until November 1, 2011, at which time your Google Groups profile will be deleted.” Just deleted, eh? Unless Google Groups is shutting down, it seems possible that a migration route to G+ might be offered with an ultimatum of sorts.

The date is very close to that of G+. Might it be the reason for the change? Groups profiles were ripe for abuse by trolls gaming the scoring system (Google allows the same user to vote an unlimited number of times and thus game the system). The key question is: will Google try to convert many Groups users into people with a G+ account without going through conventional routes? Other than privacy invasion, there is not much to be gained from such networks. The way Google treats personal information matters because Google is… well, Google.

TechBytes Episode 55: Ubuntu Not in News, Google All Over the News


Direct download as Ogg (1:12:17, 16.0 MB) | High-quality MP3 (26.8 MB) | Low-quality MP3 (8.3 MB)

Summary: Tim and Roy do the first show where an outline of topics was prepared in advance by both sides

Tim and Roy speak about a variety of subjects ranging from Nortel's patents, Google's amazing Android growth, and Google Plus to Ubuntu's decline in the news, Sabayon Linux 6, Mozilla versions, and KDE 4.7 (RC). Roy mentions his Argos/Archos revelations from yesterday, while Tim brings up a Microsoft rumour and ACS:Law news.

The show features “Love Is A Dirty Word” by Jason Collett, “Luxury” by Tigersapien, and “Divide & Conquer” by Vandaveer. We hope you will join us for future shows and consider subscribing to the show via the RSS feed. You can also visit our archives for past shows. If you have an Identi.ca account, consider subscribing to TechBytes in order to keep up to date.

