Archive for the ‘Open Source’ Category

Eben Moglen Outlines the ‘Kill Switch’

Eben Moglen, the Free Software Foundation’s attorney, has just given an interview to The Register, in which he explains his strategy for rendering Novell’s deal with Microsoft ineffective. Below are a couple of snippets extracted from the full interview.

“We’re going to make the deal not tenable and we urge Microsoft to back away as gracefully and as quickly as possible from a deal that won’t work.”

[...]

“If GPL 3 goes into effect with these terms in it, Novell will suddenly become a patent laundry; the minute Microsoft realizes the laundry is under construction it will withdraw.”

Open Source Journalism Compared to OS Software Development

Here is my first article for NewAssignment.net. The final version was polished and made more reader-friendly by David Cohn. Below is the initial essay (a ramble-saturated draft, as a matter of fact), as it stood before any further editing.

On many levels, the process of collaboratively writing articles resembles the process of Open Source software development. Herein we investigate various aspects that demonstrate and exemplify the parallels between open-ended work on computer code and open-ended work on factual content which is controlled by a group.

1. Bug fixing in code equates to the process of peer review and fact checking in journalism

To use an example, Mozilla’s bug-catching efforts are the equivalent of fact checking, and there are many similarities between the two processes. More eyeballs on the code (or on articles) make bugs (or mistakes) shallow, to paraphrase Linus’s Law.

2. Reuse and reproducibility of code versus exploration of content and public databases

With openness in mind, one is more interested in sharing. This leads to the idea of building upon prior or present work of others. That, as a matter of fact, is where the power of Open Source concepts and paradigms lies, particularly those which align with scientific motives (e.g. standing on the shoulders of giants to see further, to use Newton’s analogy). One of the strengths of Open Source-esque assemblage of information is that one can ‘massage’ content or code to better suit different purposes in different contexts.

3. Versioning, editions, fixes, patches, and corrections as separate press releases

Open Source journalism opens the doors to, and indeed encourages, versioning of content (revisions, changes, and rollbacks). Moreover, material can be passed among various minds, attracting more pairs of eyes which verify its validity. Consequently, it can be corrected before outrageous, inflammatory, or erroneous information gets published. Once again, the argument revolves around peer moderation and review. Open Source is renowned for being a quick and reliable method for assessing work, thereby providing almost-immediate quality assurance.
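
To make the analogy concrete, here is a minimal sketch in Python of how versioned content with rollback might look. It is not modelled on any particular publishing system or wiki; the class name and the sample text are hypothetical and purely illustrative.

    # A toy model of content versioning: every edit is retained as a revision,
    # so a problematic change can be rolled back before publication.
    class VersionedArticle:
        def __init__(self, text):
            self.revisions = [text]            # revision 0 is the original draft

        def edit(self, new_text):
            """Record a new revision and return its revision number."""
            self.revisions.append(new_text)
            return len(self.revisions) - 1

        def rollback(self, revision):
            """Discard everything after the given revision and restore its text."""
            self.revisions = self.revisions[:revision + 1]
            return self.revisions[-1]

    story = VersionedArticle("Initial draft of the story.")
    story.edit("Draft with an unverified, inflammatory claim added.")
    print(story.rollback(0))                   # peer review catches it; revert

Real systems (wikis, version-control repositories) add diffing and merging on top of this, but the principle of reversible, reviewable revisions is the same.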

4. Expansion; no privatisation or ‘commoditisation’

Content which is open is available for extension, since ownership becomes, so to speak, ‘fluid’ territory. Creative Commons licenses, for example, permit and accommodate derivative work with various defensive exceptions. It becomes a case where the “one-size-fits-all” myth is broken: different authors and audiences seek accurate information with a different slant.

5. Code similarity, duplicability and linking

The popularity and wealth of hyperlinks on the Web rid authors of the laborious need for references which are detailed, cluttered, and difficult to follow. Links obviate the need to repeat. With the vast size of the Web, however, concerns also emerge with regard to the reliability of information. On the other hand, the ability to follow links with ease makes it less necessary to reiterate material that merely does the reader’s ‘leg work’. Similar information can be merged and inter/cross-linked, and knowledge is thus forged to become more cohesive and centralised. The excess that duplicability permits is the motivator for gathering information in a single place, which, in turn, enables more eyes to view and correct mistakes. It is hardly surprising that collaboration and unification (much like standardisation) benefit all by improving the eventual outcome. They prevent waste, as well as reinventions and reincarnations of the same wheel.

6. Monetised model — donate to investigations that appeal to you

Independence for the writer is analogous to the freedom of a freelance software developer. Financial interests can often be curtailed (albeit not necessarily so).

7. Recognition as the motivator, money as a secondary factor

It is no secret that many reporters are overworked and underpaid. The reason for persistence, however, is often genuine interest, true personal involvement, and impact. The same can sometimes be said about software developers who seek to fulfil a goal, overcome perils, and address personal pet peeves. Satisfaction comes from a self-tailored product, be it an article or software. Eric Raymond describes this phenomenon as developers ‘scratching their own itch’, where each member of the development group addresses his or her own needs and desires. Ultimately, the requirements of the entire team combine to produce a software package with which everyone is pleased. Put differently, the result is, on average, optimal for the group involved.

8. Audience judges and substantiates popularity, but quantity does not lead to higher cost of production

One of the key observations to make is that the costs of manufacturing electronic code (text/information) are virtually non-existent. No physical raw material is involved. Let us note, and set aside, the exception: information is environment-friendly only if one neglects the energy required to store it, since some storage (e.g. random-access memory) retains information only while powered, and generating that power, in itself, requires the combustion of fuel. A tiered Web (and the end of Net neutrality) could soon change this perception, however.

Unlike the production of a pair of shoes, for example, replicating code (programs) is a money-independent process; it is only the research and development that counts. Likewise, in electronic journalism, it is the effort of the group (e.g. time spent) that counts, while no paper needs to be produced. Bandwidth limits aside, a successful story can reach a wide audience and receive a lot of feedback (including corrections) without raising production or delivery costs.

9. Authorship/credit

It is a known fact that people’s own code (or product of composition) becomes their ‘baby’. With Open Source models, there remains an electronic trail which indicates who contributed which part of an article or program (e.g. SVN or CVS for version control). Thus, credit can be given to the rightful person, or mistakes can be traced back to the culprit (the one guilty of erroneous inclusions).
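
As a rough illustration of such an ‘electronic trail’, here is a toy sketch in Python, not the actual behaviour of SVN or CVS; the contributor names and the article text are made up. Each line of a piece carries the identity of whoever last touched it, much like a version-control ‘blame’ or ‘annotate’ listing.

    # Each line of the article is stored together with the author of its last change,
    # so credit and blame can both be traced back to a contributor.
    def apply_revision(article, edits, author):
        """Replace the lines listed in `edits` and attribute them to `author`."""
        revised = list(article)
        for index, new_text in edits.items():
            revised[index] = (new_text, author)
        return revised

    draft = [("Novell signed a deal with Microsoft.", "alice"),
             ("The deal was announced in 2005.", "alice")]

    # A second contributor corrects a factual slip in the second line.
    checked = apply_revision(draft, {1: "The deal was announced in November 2006."}, "bob")

    for text, author in checked:
        print(f"{author:>6}: {text}")          # the per-line trail of contributions

Real version-control systems record far more (timestamps, diffs, commit messages), but the principle is the same: attribution survives every revision.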

10. Editorial control, personal writing style/unification

Consistency is an important aspect of paper-based publications and code alike. It is important not to duplicate effort (e.g. similar/identical articles or multiple functions which achieve a similar goal). There needs to be supervision that relies on knowledge that is applied in a top-down direction.

In summary, there are several characteristics of Open Source software development which define and promote the concept of Open Source journalism. Key virtues sought by journalists can be backed by successful, established, and well-proven parallels in software.

I have also put together a short bio for the site (mildly edited).

Bio: Roy Schestowitz is a Ph.D. Candidate in Medical Biophysics at Manchester University. While on the verge of completion, he finds himself fascinated by the endless possibilities which the World Wide Web offers; consequently, he often gets distracted and sidetracked. He advocates the use of Open Source technology in the public and private sectors, and uses his background in computing to make a personal contribution to the Free Software movement. In his spare time he engages in activities ranging from weights-related sports to easy-listening music.

Putting Code in Perspective

I used to envision myself coding for the rest. Of. My. Life. Many times I thought I had it all figured out. Nevertheless, experience suggests that preferences, trends, and prospects always change. The industry nowadays is dynamic, volatile, and ever-changing, so long-term predictions are utterly silly.

I have neither stopped coding, nor begun to neglect purely technical things, but I refuse to ‘reinvent the wheel’, so to speak. Apart from the packaged (often closed-source) applications out there mimicking each other and achieving virtually the same thing, one can turn to SourceForge and see that not much project code is being reused, let alone used (downloaded and run). There is a certain exhaustion in the market due to saturation, as well as the ability to duplicate good software for free. That’s why I advocate Free software, albeit not at a level of commitment that equates to activism per se.

Google on a computer screen

I contend that companies which ignore Open Source will not survive. Let’s take Opera, for example. Opera needs to open up in order to attract enthusiastic volunteers (for plugins and testing, among other contributions); only then will it match Firefox. Internet Explorer is all about monopoly abuse, so the same rules don’t apply. It does, however, go to show that proprietary software evolves slowly and often plays catch-up.

Maintenance and support will soon exceed programming and development in terms of capacity. Systems administration and architecture are the next big wave. “Open Source” is becoming more of a vernacular term, and hordes of programmers will stampede toward that development paradigm. Whither your right to survive in your career? Better to evolve, sooner rather than later.

Prophetic Prediction in 2001: Open Source Monopoly

Crystal ball

CNET presented the following perspective just 5 years ago.

It’s not hard to understand why open-source software, such as the Linux operating system and the Apache HTTP server, is growing in popularity among corporate IT departments. When source code is open, any developer is free to read, redistribute and modify it. This leads to faster bug fixes, improved software and lower development costs.

This appears to be gradually becoming a reality. The ideas and principles are more openly accepted as valid.

Software Oligopoly and Impending Transition

Vista error message

I shall continue to argue that Microsoft software (and Windows primarily) is slow and too cumbersome to work with. It discourages high productivity levels. Might this explain why that company from Redmond has produced so little in the past 5 years? Let us discuss.

I am shocked to see a software behemoth with so many employees still struggling to ship products on time. I can recall that rusty O/S called Windows XP, which was released when I was a teenager. It’s amazing that Microsoft has achieved so little in the past half a decade. All it has been able to come up with is just another ‘Service Pack’, to be ready some time next year. This one has a different name and a new theme, Aero Glass (see above). It also bears a hefty price tag.

Linux users may like to handle complexity, but meanwhile it seems as though their codebase is far more maintainable than that of Microsoft. In case you have not followed some key events, 60% of the codebase of Windows must be rewritten as it’s an utter pain to extend.

Let us take a step further and discuss the issues of security, diversity, and competition. Windows was not built with security or multiple users (let alone networks of them) in mind, so it is merely ‘patched’ to bridge that crucial gap. A one-man election might work on a Windows-based Diebold machine. Windows is, after all, a single-user O/S with some ‘hacks’ that make it possible to use on a network involving more than a single user (e.g. the Internet). And that is worrisome. This has led to cyberstorms and makes the Internet a less pleasant place than ever before. Patches take long to issue because, in a codebase full of ‘hacks’, there are just too many dependencies to consider; modularity is poor. This monolithic approach leads to flakiness and unpredictable behaviours.

Is diversity the answer? Is a staged migration to more mature and reliable platforms the path to secure computing? I have little or no doubt about it. But this will not be easy. It is only natural to assert that Microsoft is doing illegal things to stifle its competition. Such a software industry vermin deliberately restricts ‘diversification’. There can be no parity in an industry where any competitive threat is outmuscled before it matures enough to match the behemoth. It strives towards a state of mono/oligopoly, and the law offers no barriers as it is being tweaked by lobbyists.

Competition, you argue? I see none, but luckily people are beginning to see this and respond accordingly. It is a false sense of competition when a startup needs and depends on vendor X in order to develop a product that competes with vendor X. That is what Microsoft does through operating systems, distribution channels, licensing, and programming languages. Its stranglehold on the market may soon be broken, at least in Europe. The rest of the world is quietly migrating to Linux, albeit at a scale that is not being blown out of proportion by advertisements; there is no marketing department in a Free software initiative.

Bill Gates Came From an Open Source Background

Bill Gates arrested in his younger days (photo in public domain)

From: 'Programmers at work', Microsoft Press, Redmond, WA [c1986]

Interviewer: “Is studying computer science the best way to prepare to be a programmer?”

Gates: “No, the best way to prepare is to write programs, and to study great programs that other people have written. In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating system.”

Not only does this confession put Gates in a position of hypocrisy, but it also explains Windows’ origins — the garbage cans.

Related item: Microsoft accused of software communism (totality)

The First Linux Distribution


The old ManLUG Web site

The Linux Distro Timeline is a nice little project which visualises the connections between different Linux distributions as a function of time (there is also a Linux mindmap). In any event, it led me to explore Wikipedia, which supports the argument that the first GNU/Linux distribution was developed right here where I work. Below is a snippet of some prophetic words from Owen LeBlanc.

There are now two free Unix operating systems available for PCs: Linux and 386BSD.

Linux is the more mature system, now available in its fifth public test version, 0.95a. The system requires a 386 or above, with or without a co-processor, with a minimum of 2 Mbytes of memory, and with at least 4 Mbytes recommended. The source for the operating system requires about 0.5 Mbytes, and binaries currently available (about 250 commands) require 8 to 10 additional Mbytes, although, of course, you may delete unwanted bits, or add further programs. With swap space, this means a minimum of about 20 Mbytes of hard disk space. …
