Archive for the ‘Linux’ Category

OpenShot: Good for Novice Users

656px-Screenshot_of_OpenShot

OpenShot is a fantastic video editor for those who, like myself, are not video editing experts and cannot afford to spend hours just learning how to use a video editing program. OpenShot has all the basic functionality, and it is coupled with the power of Blender to help achieve rather pretty effects and transitions. Having tried numerous other video editors for GNU/Linux and wasted many hours just trying to learn them (or failing to find some missing functionality), I can warmly recommend OpenShot.

QuinStreet Keeps Vital GNU/Linux Sites Offline. Help Change This.

What the world needs is preservation, not de facto censorship

LinuxDevices

Summary: Please let QuinStreet understand why it should let DesktopLinux and LinuxDevices articles return to the World Wide Web

A company known as QuinStreet bought DesktopLinux and LinuxDevices almost exactly a year and a half ago, along with Ziff Davis Enterprise. As the announcement from QuinStreet put it, “QuinStreet will benefit from new and expanded relationships with some of the largest clients in the B2B technology vertical, as well as an impressive group of additional editorial professionals and expert writers.”

QuinStreet logo

But what about all the valuable news stories? Not too long ago DesktopLinux and LinuxDevices went dark. The domains were left to rot and the articles accumulated there for over a decade became inaccessible, essentially deleted from the Web. It has come to our attention, after some inquiries with relevant individuals, that people who contributed to DesktopLinux and LinuxDevices, including the founders, do wish for the content to return online, or for the copyrights on the articles to be changed, explicitly or implicitly, such that all the articles can be brought back to the Web by those to whom DesktopLinux and LinuxDevices were important resources or a matter of personal contribution.

DesktopLinux and LinuxDevices have published many thousands of high-quality articles detailing the history of GNU/Linux on desktops and in the device space, including the earlier days of Android. To let history be purged when QuinStreet has no financial incentive to do so is just counter-productive. Please write to CustomerService@QuinStreet.com to suggest that they relinquish the copyrights on articles or relicense so as to allow reposting of the articles online. This oughtn’t be complicated. This can be achieved by a single E-mail to the right people. If the company cannot provide a copy of the database (the ideal solution), then the articles can be pulled from the Web Archive and revived one by one (unless the process gets scripted). There are several people, including ourselves, who are eager to bring back exposure to articles that took many thousands of hours of work to produce. General enquiries and polite appeals can also be sent by mail to the following address:

950 Tower Lane, 6th Floor
Foster City, CA 94404
Tel: (650) 578.7700
Fax: (650) 350.1423

Help us restore two of the journals of record of the GNU/Linux community. Ask QuinStreet to collaborate on these efforts and preserve its reputation in the process. Right now the only barrier standing between the articles being online and those who want to put them online is lack of permission from QuinStreet. If QuinStreet does not want or need the articles, then why leave them offline in some dark room (or just on a backup tape)? Together we can promote dissemination of reliable historical information and also respect the work of many passionate people. Please let QuinStreet know how much DesktopLinux and LinuxDevices mean to you. Together we can accomplish this. Please be polite.

Google Glass: Wearable Surveillance

1411835_the_live_food_

OTHER than the fact that Google Glass is Linux-powered and partly Free/Open Source, I have never had interest in Google Glass. The fact that it is hackable — in the sense that one can install one’s own system on the hardware — sure makes a difference, but most people will never practise this freedom. As long as Google, by default, hoovers in data from Google Glass (like it does on the Nexus series), the data is easily accessible to the Surveillance Industrial Complex. This ties into the previous post about peer-surveillance. There is no escaping it, and there is reason to antagonise Google Glass as a concept, irrespective of whether one buys/uses it. A lot of people will have no choice as to whether their life (as dynamic imagery) is captured and then uploaded to a datacentre with weak data sharing/protection/retention policies. This is not the same as CCTV. Here we talk about videos that are captured in private spaces, too, more so than surveillance drones, whose motion is limited to the air and which are still privacy-infringing, albeit less ubiquitous due to cost, air traffic control, legislation and so on.

This is not about resisting a brand. It’s not hating advancement or fearing the future as Google likes to paint it. It is about telling the difference between marketing (the technology for Google Glass as an implementable concept has been around for decades) and societal effects. It’s like antagonising proprietary software for its effects on society, regardless of practical uses. Fog Computing (‘cloud’) should be rejected on similar grounds. Not everything that can be done should be done, at least or especially if it disregards the consent of non-participants.

To the user, the novelty here is the size of the hardware, the image resolution, and the wireless connection speeds (not related to Google at all).

To the Surveillance Industrial Complex, the novelty here is the ability to access a private (i.e. not accessible by us) database of videos for any given person queried (identity can be derived in a variety of ways, ranging from inter-personal connections to audio, video, and geographical location).

Schestowitz.com Statistics: Over 10% of Visitors Use GNU/Linux

4 days before the end of this month

schestowitz.com 2013 stats

schestowitz.com 2013 stats for OS

Stallman on Ubuntu

Last year I asked Dr. Stallman to comment on what Ubuntu/Canonical had done with regards to privacy and since then he has expressed his view very clearly, most recently in this video.

Graphically Managing Server-hosted Drives (Over SSH)

On interacting with various file servers or client servers as though they are local

Background

Sometimes we may wish to allow users, logged in remotely (away from their main workstations) or wishing to connect to another host where essential files are located, to access those files. A convenient way to achieve this without proprietary protocols is SSH in SCP ‘mode’, meaning that OpenSSH is used to gather information about remote filesystems and pass files across the network on demand. There are convenient ways to manage this in UNIXy file managers. These are unified by the universal command-line syntax, but the front ends may vary and be based on Qt, GTK, etc. Here is a demonstration of how this is achieved in Dolphin (KDE) in order to remotely log in to an SSH-enabled (running sshd) host.

Connecting to the Server

File managers typically have an address bar, which in simplified interfaces is not editable unless one presses the universally-accepted CTRL+L (for location), which then replaces a breadcrumbs-styled interface with an editable line. Here is an example of Dolphin before and after.

dolphin-address-bar.png

Now, enter the address of the server with the syntax understood by the file manager, e.g. for KDE:


fish://[USER name]@[SERVER address]

One can append "/[DIRECTORY path]" to start in a particular directory on the accessed server.

The syntax is the same for Konqueror and a few other file managers, with the exception of the “fish://” part, which is handled by kio. Here is what the password prompt/dialogue may look like.
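For scripting or quick testing, such a URL can also be assembled in the shell and handed to the file manager directly. The user name, server, and path below are hypothetical stand-ins; substitute your own:

```shell
# Hypothetical account and server; replace with real values.
USER_NAME="alice"
SERVER="server.example.org"
REMOTE_DIR="/home/alice"

# Build the URL in the form KDE's file managers understand.
URL="fish://${USER_NAME}@${SERVER}${REMOTE_DIR}"
echo "$URL"
```

Passing such a URL as an argument to `dolphin` or `konqueror` opens the remote location directly, assuming the fish KIO slave is available.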

login.png

Syntax may vary where the protocol, SSH in this case, is specified, but the port number is always the same (22 by default) and Nautilus can handle this too. Once the remote filesystem is shown like a local file system it can be dragged into the shortcuts pane, bookmarked, or whatever the file manager permits for fast access, including the facility for remembering the password/s (handled by kwallet in KDE).

The Nautilus Way

I have installed Nautilus to document the process for Nautilus as well.

The process can be done with the GUI in Nautilus. This is achieved slightly differently and takes a little longer. Here are simple steps:

Step 1

Open Nautilus (installed under KDE in this instance, with the QtCurve theme).

nautilus-scp-step1.png

Step 2

Open the “File” menu.

nautilus-scp-step2.png

Click “Connect to Server…”

Step 3

Choose SSH (unless another protocol is desired).

nautilus-scp-step3.png

Step 4

Enter the server name (or IP address). Optionally enter the port number (if different from the standard port for this protocol), path (called “Folder”) and of course the username (“User Name”). Shortcuts can be created by using the options beneath.

nautilus-scp-step4.png

Step 5

Finally, enter the password and access is then granted.

nautilus-scp-step5.png

By keeping passwords in memory or on disk one can more rapidly and transparently access the remote drive again, reaching files seamlessly.

Working on Files Remotely

This is where a lot of power can be derived from the above process. Using KIO slaves in KDE, for instance, files can be opened as though they are stored locally, and when an application saves (applies changes) to those files, KIO will push the changed file into the remote file store (working in the background). This means that headless servers can be interacted with as though they are part of the machines that access them. No need for X Server, either. Since many machines out there are configured to be minimal (no graphical desktop), this sure proves handy.

opening-over-ssh.png

Backing Up Bootloader and Home Directories Over the Network

Backups that are robust and resistant to disasters like fire should be distributed. Backing up by writing to external drives is good, but it is not sufficient. Here is how I back up my main machine. Backup is quick when done as a small number of rather large files (not too large, as some file systems cannot cope with those), so one trick goes like this:


sudo mkdir /media/backupdrive/`date +%Y-%m-%d` # (if disk space permits, or via an externally mounted drive for another safety net)
sudo tar -cf - /home/ | split -b 1000m - /media/backupdrive/`date +%Y-%m-%d`/Home-`date +%Y-%m-%d`.tar.

This includes all the hidden files.

To reassemble:


cat * | tar -xf -

This is then scp‘d or rsync‘d over to another box, based on a wildcard or a directory like


/media/backupdrive/`date +%Y-%m-%d`

This leaves a stack of backups in case the latest one is already ‘contaminated’. rsync of the original files, uncompressed and untarred, can be fast except for the first time, but for a stack of backups it’s not suitable.
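The whole round trip (tar, split, reassemble) can be rehearsed safely on throwaway directories before pointing it at /home; every path below is a temporary stand-in, not the real backup drive:

```shell
# Stand-in directories so this dry run touches nothing important.
SRC=$(mktemp -d)       # pretend /home
DEST=$(mktemp -d)      # pretend /media/backupdrive
RESTORE=$(mktemp -d)   # where the reassembled copy lands

echo "important data" > "$SRC/notes.txt"
STAMP=$(date +%Y-%m-%d)
mkdir -p "$DEST/$STAMP"

# Same pattern as above: tar to stdout, split into chunks.
tar -cf - -C "$SRC" . | split -b 1000m - "$DEST/$STAMP/Home-$STAMP.tar."

# Reassemble the chunks and extract elsewhere to verify the backup is sound.
cat "$DEST/$STAMP"/Home-"$STAMP".tar.* | tar -xf - -C "$RESTORE"
cat "$RESTORE/notes.txt"
```

A dry run like this confirms the chunk naming and the `cat`-then-`tar` reassembly work before any real data depends on them.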

But more files need to be backed up, the bootloader for instance.

MBR backup and restore is done with dd:


dd if=/dev/sdX of=/tmp/sda-mbr.bin bs=512 count=1

dd if=sda-mbr.bin of=/dev/sdX bs=1 count=64 skip=446 seek=446

If this is saved to the home directory, then it’s included in the main backup.
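The dd pair can likewise be tested against a scratch file instead of a real disk; the file here stands in for /dev/sdX, so a typo cannot wipe a drive:

```shell
# A 512-byte scratch file standing in for /dev/sdX.
DISK=$(mktemp)
BACKUP=$(mktemp)
dd if=/dev/urandom of="$DISK" bs=512 count=1 2>/dev/null

# Back up the first sector, as in the text.
dd if="$DISK" of="$BACKUP" bs=512 count=1 2>/dev/null

# The restore command above writes back only the partition table
# (64 bytes at offset 446); conv=notrunc is needed here because,
# unlike a block device, a regular file would otherwise be truncated.
dd if="$BACKUP" of="$DISK" bs=1 count=64 skip=446 seek=446 conv=notrunc 2>/dev/null

cmp -s "$DISK" "$BACKUP" && echo "sectors match"
```

Since the restore only rewrote bytes the backup already contained, the two files compare as identical, which confirms the offsets are right.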

Original styles created by Ian Main (all acknowledgements) • PHP scripts and styles later modified by Roy Schestowitz • Help yourself to a GPL'd copy
|— Proudly powered by W o r d P r e s s — based on a heavily-hacked version 1.2.1 (Mingus) installation —|