Archive for the ‘CMS’ Category

Our Drupal Interview With Jeffrey A. “jam” McGuire, Open Source Evangelist at Acquia

Tux Machines has run on Drupal for nearly a decade (the site itself is older than that), and we recently had the pleasure of speaking with Jeffrey A. “jam” McGuire, Open Source Evangelist at Acquia, the key company behind Drupal, co-founded by Drupal’s creator. The questions and answers below are relevant to anyone whose Web site depends on Drupal.

1) What is the expected delivery date for Drupal 8 (to developers) and what will be a good point for Drupal 6 and 7 sites to advance to it?

Drupal 8.0.0 beta 1 came out on October 1, 2014, during DrupalCon Amsterdam. It’s a little early for designers to port their themes, for good documentation to be written, or for translators to finalise the Drupal interface in their language – some things are still too fluid. For coders and site builders, however, it’s a great time to familiarise yourself with the new system and start porting your contributed modules. Read this post by Drupal Project Lead, Dries Buytaert; it describes more thoroughly who and what the beta releases are and aren’t good for: “Betas are good testing targets for developers and site builders who are comfortable reporting (and where possible, fixing) their own bugs, and who are prepared to rebuild their test sites from scratch when necessary. Beta releases are not recommended for non-technical users, nor for production websites.”

With a full Release Candidate or 8.0.0 release on the cards for some time in 2015, now is the perfect time to start planning and preparing your sites for the upgrade to Drupal 8. Prolific Drupal contributor Dave Reid gave an excellent session at DrupalCon Amsterdam, “Future-proof your Drupal 7 Site”, in which he outlines a number of well-established best practices in Drupal 7 that will help you have a smooth migration when it is time – as well as a number of deprecated modules and practices to avoid.

2) What is the importance of maintaining API and module compatibility in future versions of Drupal and how does Acquia balance that with innovation that may necessitate new/alternative hooks and functions?

The Drupal community, which is not maintained or directed by Acquia or any company, has always chosen innovation over backward compatibility. Modules and APIs of one version have never had to be compatible with other versions. The new point-release system that will be used from Drupal 8.0.0 onwards – along with new thinking among core contributors and the broader community – may change this in future. There has been discussion, for example, of having APIs valid over two releases, guaranteeing that a Drupal 8 module would still work in Drupal 9 and that a Drupal 9 module would work in Drupal 10. Another possibility is that this all may be obviated in the future as moves toward broad intercompatibility in PHP lead to the creation of PHP libraries with Drupal implementations rather than purely Drupal modules.

3) Which Free/libre software project do you consider to be the biggest competitor of Drupal?

The “big three” FOSS CMSs – Drupal, WordPress, and Joomla! – seem to have settled into roughly defined niches. There is no hard and fast rule to this, but WordPress runs many smaller blogs and simpler sites; Joomla! projects fall into the small to medium range; and Drupal projects are generally medium to large to huge and complex. Many tech people with vested interests in one camp or another may identify another project as “frenemies” and compete with these technologies when bidding for clients, but the overall climate between the various PHP and open source projects is friendly and open. Drupal is one of the largest free/libre projects out there and doesn’t compete with other major projects like Apache, Linux, Gnome, KDE, or MySQL. Drupal runs most commonly on the LAMP stack and couldn’t exist or work at all without these supporting free and open source technologies.

NB – I use the term “open source” as synonymous shorthand for “FOSS, Free and Open Source Software, and/or Free/libre software”.

4) Which program — proprietary or Free/libre software — is deemed the biggest growth opportunity for Drupal?

Frankly, all things PHP. Drupal’s biggest growth opportunity at present is its role as an innovator and “meta-project” in the current “PHP Renaissance”. While fragmented at times in the past, the broader PHP community is now rallying around common goals and standards that allow for extensive compatibility and interoperability between projects. For the upcoming Drupal 8 release, the project has adopted object-oriented coding, several components from the Symfony2 framework, a more up-to-date minimum version of PHP (5.4 as of October 2014), and an extensive selection of external libraries.

On the one hand, Drupal being at the heart of the action in PHP-Land allows it and its community of innovators to make a more direct impact and spread its influence. On the other hand, it is now also able to attract even more developers from a variety of backgrounds to use and further develop Drupal. A Symfony developer (who has had a client website running on Drupal 8 since summer 2014) told me that looking under the hood in Drupal 8, “felt very familiar, like looking at a dialect of Symfony code.”

5) To what degree did Drupal succeed owing to the fact that Drupal and all contributed files are licensed under the GNU GPL (version 2 or 3)?

“Building on the shoulders of giants” is a common thread in free and open source software. The GPL licenses clearly promote a culture of mutual sharing. This certainly applies to Drupal, where I can count on huge advantages: more than twelve years of development, 100k+ active users, something like 2% of the Web running Drupal for thousands of businesses, and millions of hours of coding and best practices by tens of thousands of active developers. Our code being GPL-licensed and collected in a central repository on Drupal.org has allowed us to build upon the strengths of each other’s work in a Darwinian environment (“bad code dies or gets fixed” – Jeff Eaton) where the best code rises to the top and becomes even better thanks to the attention of thousands of site owners and developers. The same repository has contributed to a reputation economy where bad actors and dubious or dangerous code have little chance of survival.

The GPL 2 is business friendly in that the license specifically allows for commercial activity and has been court tested. As a result, there is very little legal ambiguity in adopting GPL-licensed code. It also makes clear cases for when code needs to be shared as open source and when it doesn’t (allowing for sites to use Drupal but still have “proprietary” code). The so-called “Web Services Loophole” caused some controversy and discussion, but also opened the way to SaaS products being built on free/libre GPL code. Drupal Project Lead Dries Buytaert explained this back in 2006 (read the full post here):

“The General Public License 2 (GPL 2), mandates that all modifications also be distributed under the GPL. But when you are providing a service through the web using GPL’ed software like Drupal, you are not actually distributing the software. You are providing access to the software. Thus, a way to make money with Drupal is to sell access to a web service built on top of Drupal. This is commonly referred to as the web services loophole.”

Business models remain challenging in a GPL world; nothing stops me from selling you GPL code, but nothing stops you from passing it on to anyone else either. App stores, for example, are next to impossible to realise under these conditions. Most Drupal businesses focus on value-added services like site building, auditing and consulting of various kinds, hosting, and so on, with a few creating SaaS or PaaS offerings of one kind or another.

6) What role do companies that build, maintain and support Drupal sites play in Acquia’s growth and in Drupal’s growth?

Acquia was the first company to offer SLA-based commercial support for Drupal (a Service Level Agreement essentially says, “In return for your subscription, Acquia promises to respond to your problems within a certain time and in a certain manner”). The specifics of response time and action vary according to the level of subscription, but these agreements allowed a new category of customer to adopt Drupal: the enterprise.

Enterprise adoption – think Whitehouse.gov, Warner Music, NBC Universal, Johnson & Johnson – of Drupal resulted in increased awareness and therefore even further increased adoption (and improvement) of the platform over time. Everyone who delivers a successful Drupal project for happy clients improves Drupal for everyone else involved. The more innovative projects there are, the more innovation flows back into our codebase. The more happy customers there are, the more likely their peers are to adopt Drupal, too. Finally, the open source advantage also comes into play: it behooves Drupal service providers to give the best possible service and deliver the highest-quality sites and results. If they don’t, there is no vendor lock-in and being open source at scale also means you can find another qualified Drupal business to work with if it becomes necessary. Acquia and the whole, large Drupal vendor ecosystem simultaneously compete, cooperatively grow the project (in code and happy customer advocates), and act as each other’s safety net and guarantors.

7) How does Acquia manage and coordinate the disclosure of security vulnerabilities, such as the one disclosed on October 15th?

Acquia as an organisation is an active, contributing member of the Drupal community and it adheres strictly to the Drupal project’s security practices and guidelines, including the Drupal project’s strict procedure for reporting security issues. Many of Acquia’s technical employees are themselves active Drupal contributors; as of October 2014, ten expert Acquians also belong to the Drupal Security Team. Acquia also works closely with other service providers, whether competitors or partners, in the best interests of all of us who use and work with Drupal. This blog post, “Shields Up!”, by Moshe Weizman explains how Acquia, in cooperation with the Drupal Security Team and some other Drupal hosting companies, dealt with the recent “Drupalgeddon” security vulnerability.

How to Patch Drupal Sites

My experience patching Drupal sites goes back years, and my general ‘policy’ (habit) is not to upgrade unless or until there is a severe security issue. It’s the same with WordPress, which I have been patching on several sites for over a decade. Issues like role escalation are not serious if you trust your fellow users (authors) or if you are the sole user. For some agencies that use Drupal, it may be fair to say that the risk introduced by changing code outweighs the safety gained, because as far as one can tell, visitors to such sites do not even register for a username. All users are generally trusted and work closely together (one would have to check the complete list to be absolutely sure). There is also a ‘paper trail’ of who does what, so if someone were to exploit a bug internally, e.g. to do something he or she is not authorised to do, it would be recorded, which in itself acts as a deterrent.

If a security issue is trivial to fix with a trivial patch, then I typically apply it manually; that is what many people did when the SQL injection bug surfaced some months back. For larger releases (not bug fixes) the same applies, until there is no alternative. What one needs to worry about more are module updates, especially security updates. One should make a list of all modules used and keep track of news and new releases (watching general FOSS news is usually not enough until it’s too late). Thankfully, detailed information on the flaws and the associated risks becomes available, both for core and for additional/peripheral modules.
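As a minimal sketch of that manual approach (every path, file name, and the “fix” itself are invented for illustration; a real patch would come from the security advisory), applying a one-hunk patch by hand looks like this:

```shell
#!/bin/sh
# Sketch: applying a trivial one-hunk security fix by hand with patch(1).
# All names and content here are illustrative stand-ins.
set -e
rm -rf /tmp/patch-demo && mkdir -p /tmp/patch-demo/includes && cd /tmp/patch-demo
printf 'function expand() {\n  $i = 0;\n}\n' > includes/database.inc
cp includes/database.inc includes/database.inc.bak   # always keep the original
cat > security-fix.patch <<'EOF'
--- a/includes/database.inc
+++ b/includes/database.inc
@@ -1,3 +1,3 @@
 function expand() {
-  $i = 0;
+  $i = 0; /* hardened */
 }
EOF
patch -p1 < security-fix.patch    # apply, then eyeball the change:
grep -n 'hardened' includes/database.inc
```

Keeping the `.bak` copy means a bad patch can be undone with a single `mv`, which is often faster than restoring from a full backup.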

Then there’s testing, which one ought to do for any changes made, assuming time permits. The last major Drupal flaw had a seven-hour window between publication and mass exploitation (maybe millions of sites). This means one cannot always follow the formal testing procedure, though testing in an ad hoc way, or minimising risk by applying only a small patch, ought to work well. This leads me to suggest that developers need not one uniform workflow/process for changing Drupal but a multi-faceted one. Proposal:

If the flaw is

1. severe
2. not back-end (i.e. not related to role management)

consider the complexity of the patch and test immediately on an existing copy of the site, then deploy on ‘live’.

If the patch is a core patch, no alternatives exist. If the patch applies to a module, study the effect of disabling the module (assuming it has no dependents) and consider temporarily keeping the module out of reach on the public site(s).

For less severe flaws:

1) merge into git on a dedicated branch
2) test on a local Vagrant installation
3) schedule for deployment to “development” for testing
4) schedule for deployment to “staging”
5) run regressions (one needs to define these)
6) have the client do any required acceptance testing
7) schedule for deployment to production
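The first step of this workflow, together with the database snapshot that should accompany it, can be sketched in a throwaway repository (all names are placeholders, and a simple cp stands in for the mysqldump a real site would use):

```shell
#!/bin/sh
# Sketch of the "dedicated branch, snapshot first" discipline.
# Names are placeholders; cp stands in for a real mysqldump.
set -e
rm -rf /tmp/flow-demo && mkdir -p /tmp/flow-demo && cd /tmp/flow-demo
git init -q . && git config user.email demo@example.org && git config user.name demo
echo 'core code' > core.php && echo 'site data' > site.db
git add core.php && git commit -qm 'baseline'
cp site.db site.db.snapshot            # snapshot the database first
git checkout -qb security/example-fix  # 1) dedicated branch for the patch
echo 'core code, patched' > core.php
git commit -qam 'apply security fix'
git rev-parse --abbrev-ref HEAD        # confirms the branch we are on
```

The branch name encodes what the change is for, so steps 3, 4, and 7 can deploy it by name and roll it back by name.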

Suffice it to say, changes should be made only through git (never directly), and a database dump (or snapshot) should be taken too, both for quick fixes and for longer testing, because even if changes are reverted (git rollback) the database can be left in a sub-par/inadequate state.

Regressions of interest for Drupal are not just site-specific. There are some nice templates for these, and one needs to consider which modules the site uses. Intuition and general familiarity with the CMS loop/hooks help one predict what impact a change will have on modules, if any. Drupal has good documentation of functions (by name), so these too can be studied before changes are made. To avoid some modules ‘silently’ breaking after any change to core (or even to modules), one may need to go through a list of tests, specified in advance, that help verify no module spits out PHP errors or behaves oddly. It is common to test critical pages first, e.g. finding an authority, research reports, etc.

Sometimes it is also possible to automate the testing by making local snapshots of the pages of interest and then diff‘ing them after changes are made, using sophisticated tools like Versionista or a manual side-by-side comparison by a human operator. There are browser extensions that further facilitate this, but caching layers such as Cloudflare or Varnish can impede the process (even though changes to the underlying code may trigger an override, at least for Varnish).
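The snapshot-and-diff idea can be sketched as below; the “pages” are stubbed with printf so the example is self-contained, whereas on a live site curl or wget would fetch them (ideally with caching bypassed):

```shell
#!/bin/sh
# Sketch: snapshot key pages before a change, diff them after.
# printf stands in for curl/wget fetches of real pages.
set -e
rm -rf /tmp/regr-demo && mkdir -p /tmp/regr-demo && cd /tmp/regr-demo
printf '<h1>Research reports</h1>\n' > before-reports.html
# ... patch applied, caches cleared ...
printf '<h1>Research reports</h1>\n' > after-reports.html
if diff -q before-reports.html after-reports.html >/dev/null; then
  echo 'reports: unchanged'
else
  echo 'reports: CHECK MANUALLY'
fi
```

A loop over a list of critical URLs turns this into a crude but fast regression pass that needs no testing framework at all.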

Regressions are nice, but in many cases developers don’t have time to run them, and a simpler set of manual checks can help build confidence that the changes made have no detrimental effects.

I cannot recall ever having major issues patching (as opposed to upgrading) the core of WordPress and Drupal, and I have done this hundreds of times. The quality of testing when it comes to core (as opposed to external/additional modules) is quite high, but another worthwhile step, before making any changes, is to look around forums to see what experience other people have had. There have been cases where patches were problematic and this quickly became public knowledge; sometimes workarounds, or patches for the patches, circulate within hours.

Background reading

PhpWiki is Obsolete and PHP is to Blame

TEN years ago (or more) I installed PhpWiki. I used it to manage communications with family and friends (privately), then to collaborate with colleagues (I was co-authoring papers at the time, so I needed to manage version changes) in a publicly accessible installation (my second) of what was then an up-to-date version of PhpWiki. It was up to date at the time, but no longer. I later installed PhpWiki once again at iuron.com, where it was the main CMS (my only site managed primarily by PhpWiki).

PhpWiki is an important piece of software because it was the first publicly available wiki software written in PHP. It goes back to 1999, when very few people knew what a wiki was. Its authors deserve big thanks for releasing it under the GPL.

I have found that other people, like myself, had their data ‘locked’/’trapped’ inside PhpWiki, and since PHP is not backward-compatible they sought a way out (PhpWiki simply stopped working after an upgrade of PHP). Various wiki packages offer PhpWiki importers, but they don’t deal with the database directly; they require an export of the data, which earlier versions of PhpWiki do not appear to support (except perhaps from the command line). Upgrading PhpWiki is hard because my host for this Web site offers no access to PHP log/error files and things do not work as expected. The only way to really view the wiki data at the moment is through phpMyAdmin. What a mess.

Having spent several hours wrestling with this problem of no upgrade and no export (I checked numerous importers), I am left with no choice but to migrate the data manually to some other wiki software. I have set up FOSWiki and MediaWiki before, but I might be curious enough to try something ‘new’ (or exotic) like PmWiki, DokuWiki, or WikkaWiki. The problem, however, is that they too might become deprecated/unmaintained one day, leading to the same problem I am having right now.

PhpWiki is still being developed (just not frequently) and there are newer releases. I will always fondly remember the one wiki package that I set up and used before people knew much about wikis (except perhaps the example of Wikipedia). Here are some final screenshots of the wikis that will soon go dark, despite the fact that I customised them a great deal and modified the code to suit my needs.

If only PHP were eager to keep old software functional (compatible with newer versions of PHP), this wouldn’t have happened. I still have several other CMSs to go through, trying to upgrade, hack, or export from. Why? Because a forced upgrade to PHP 5.3 will kill them. I mostly blame PHP here. It’s an application killer. It didn’t have to end like this, and I am not the first to complain about it.

PhpWiki

PhpWiki private

Updating and Adding Modules to Drupal From the Command Line

Occasionally one needs to gain access to sites that sit behind firewalls and install software on them. This is when the graphical front ends may ‘break’ (refuse to work as expected), either because Drupal cannot access the outside world or because the outside world cannot access the Drupal instance. Drush would suffer because it depends on a connection with the outside world (usually a two-way connection). If a connection is possible from the server to the outside world but not vice versa, then we can ‘pull’ files onto the server, via SCP for example (or rsync). So in practice things are a little more complicated, but not impossible to cope with.

Having gained access to the said server (e.g. ssh -p PORT_NUMBER USERNAME@SERVER_ADDRESS), it should be simple to see where files are served from to visitors: either check the Apache configuration (usually under /etc/apache* for Debian-based distributions and /etc/http* for Red Hat-derived distributions), or use locate to find some key Drupal files such as xmlrpc.php. In practice the latter option is faster than checking Apache configuration files, so upon login run locate xmlrpc.php and go to the appropriate location (e.g. /home/drupalsites/ or /var/www/), under /modules or /sites/all/modules (typically in Drupal 7). Modules are further sub-divided; non-core modules usually go under /contrib.
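The locate trick can be simulated end-to-end with find; the directory tree below is a stand-in, whereas on a real server one would run locate xmlrpc.php or find /var/www -name xmlrpc.php:

```shell
#!/bin/sh
# Sketch: finding the Drupal docroot by searching for a known core file.
# The tree under /tmp is a stand-in for a real server's /var/www.
set -e
rm -rf /tmp/locate-demo
mkdir -p /tmp/locate-demo/var/www/site/sites/all/modules/contrib
touch /tmp/locate-demo/var/www/site/xmlrpc.php
find /tmp/locate-demo -name xmlrpc.php   # the docroot is the containing directory
```

find works even when the locate database has never been built (updatedb may not run on minimal servers), at the cost of being slower on large trees.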

If this is an attempt at upgrading a module, then it is wise to make a backup before replacing anything, e.g. mv MODULE_NAME /home/drupalsites/ to move it aside or cp -r MODULE_NAME /home/drupalsites/ to make a copy. Use sudo su if permissions are insufficient, but use sudo sparingly if it’s a live Web site.

If the module file (usually compressed archive) is accessible over the Web, then use wget. Otherwise, pull it from another server or a desktop with scp, e.g. scp USER@SERVER:/home/user/module.tar.gz .

For ZIP files use unzip, and for .tar.gz use tar, e.g. tar xzvf module.tar.gz (or the piping equivalent, gunzip -c module.tar.gz | tar xvf -). When extracted, these files should turn into directories, which can be put in place while the compressed archive gets deleted with rm. Set groups and owners appropriately, e.g. to apache on CentOS/Red Hat and www-data on Debian/Debian-derived systems. This can be done recursively too, e.g.:

chown -R www-data ./MODULE

chgrp -R www-data ./MODULE
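Strung together, a module swap might look like the sketch below. The tarball is generated locally so the example is self-contained, the module name and paths are invented, and the ownership step is shown only as a comment (it needs root and the right web user for the distribution):

```shell
#!/bin/sh
# Sketch: back up the old module, unpack the new release, tidy up.
# The tarball is generated locally so the demo is self-contained.
set -e
rm -rf /tmp/module-demo
mkdir -p /tmp/module-demo/modules/contrib/views /tmp/module-demo/backup
cd /tmp/module-demo
echo 'name = Views 3.7' > modules/contrib/views/views.info   # the "old" module
mkdir views && echo 'name = Views 3.8' > views/views.info    # the "new release"
tar czf views-7.x-3.8.tar.gz views && rm -r views
mv modules/contrib/views backup/views-old      # back up before replacing anything
tar xzf views-7.x-3.8.tar.gz -C modules/contrib
rm views-7.x-3.8.tar.gz
# on the real server: chown -R www-data:www-data modules/contrib/views
grep '3.8' modules/contrib/views/views.info    # the new version is in place
```

Note that chown accepts a user:group pair, so the chown/chgrp pair above can be collapsed into one command.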

The module(s) should now be selectable from the front end (the Drupal site, under Modules) and there should be no permission issues. In case of problems it should be possible to revert to the previous version by restoring the backup (if made) or by simply removing the ‘offending’ module with rm -rf. On Red Hat-based systems there is also room for debugging at code level, e.g.:

tail -n600 /var/log/httpd/error_log

tail -n600 /var/log/httpd/access_log

The exact file names depend on the Apache configuration. System-level debugging can also be done with help from /var/log/messages (Red Hat) and /var/log/syslog (Debian), essentially tail‘ing them:

tail -n600 /var/log/messages

To update the whole of Drupal from the command line (without drush):

In Drupal 7:

  • Make a backup of the database
  • Fetch the latest version, e.g. wget http://ftp.drupal.org/files/projects/drupal-7.24.zip
  • unzip drupal-7.24.zip
  • Put the site into maintenance mode at: http://YOUR_SITE/#overlay=admin/config/development/maintenance

Assuming the site is at public_html/home/:

  • cd public_html/home/
  • mv sites/ ../..
  • rm -rf * (or, safer, mv * ~/backup)
  • Go to the directory of the new version and rm -rf sites/
  • cp -r * ../public_html/home/

Always MAKE BACKUPS of both files and DBs. This ensures that reverting back will be possible.
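A sketch of the files half of that backup is below (paths are illustrative; on a real host one would pair the tar with something like mysqldump drupal_db | gzip > db.sql.gz for the database half):

```shell
#!/bin/sh
# Sketch: tarball of the site files before touching anything.
# Paths are illustrative; pair this with a mysqldump of the database.
set -e
rm -rf /tmp/backup-demo
mkdir -p /tmp/backup-demo/public_html/home /tmp/backup-demo/backup
echo '<?php // front controller' > /tmp/backup-demo/public_html/home/index.php
cd /tmp/backup-demo
tar czf backup/files-$(date +%F).tar.gz -C public_html home
tar tzf backup/files-$(date +%F).tar.gz    # lists home/ and home/index.php
```

Date-stamping the archive name means repeated upgrades never overwrite the previous backup.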

For Drupal 6:

  • Go to YOUR_SITE/admin/settings/site-maintenance and set the site to maintenance mode
  • Assuming the site is at /var/www/: cd /var/www/
  • Get latest version, e.g.: wget http://ftp.drupal.org/files/projects/drupal-6.30.zip
  • unzip drupal-6.30.zip
  • cd drupal-6.30
  • rm -rf sites
  • rm robots.txt
  • rm .htaccess
  • rm -rf themes/
  • rm -rf profiles
  • chgrp -R apache * (for Red Hat systems usually; on Debian use www-data)
  • chown -R apache *
  • unalias cp
  • cp -rfp * ../html/
  • Set .htaccess to allow upgrade.php access, then update
  • Go to YOUR_SITE/admin/settings/site-maintenance and set the site to be accessible to all again

The above pretty much ensures that even when a site is heavily guarded and exists behind firewalls it can be maintained, extended, and kept up to date. Public-facing sites tend to be easier to maintain.

Moodle Import of Users and Courses, Automatic Enrollment

Adapted from a work project so that a more general audience can benefit

Provided one stores data in a format which is importable into Moodle (a comma-separated list with specific data fields), a lot can be achieved very rapidly, making a migration to this powerful Free/libre software VLE rather painless. After some hours of work on a test site I decided to share my lessons so that others can benefit. In the same spirit of sharing I will also note common errors that may come up and how to overcome them. If the data exported from an old VLE system is not consistent with Moodle’s requirements, then the migration may not be so simple; it may require several iterations.

Using a test site, which was recently upgraded to the latest version of Moodle (2.5.x), I fed in some test files and encountered issues. The process did not work as smoothly as I had expected, and I really needed debugging mode enabled, as we shall see later.

Importing Users

Debugging mode yielded, for example, the following error:

Debug info: Incorrect integer value: 'yes' for column 'autosubscribe' at row 1
INSERT INTO vle_user (username,firstname,lastname,email,password,institution,idnumber,phone1,phone2,address,autosubscribe,mnethostid,city,country,lang,timezone,mailformat,maildisplay,maildigest,htmleditor,department,url,description,descriptionformat,auth,confirmed,timemodified,timecreated,suspended) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
[array ( [redacted]
)]
Error code: dmlwriteexception

Stack trace:

  • line 426 of /lib/dml/moodle_database.php: dml_write_exception thrown
  • line 1089 of /lib/dml/mysqli_native_moodle_database.php: call to moodle_database->query_end()
  • line 1131 of /lib/dml/mysqli_native_moodle_database.php: call to mysqli_native_moodle_database->insert_record_raw()
  • line 793 of /admin/tool/uploaduser/index.php: call to mysqli_native_moodle_database->insert_record()

Replacing “,yes” with “,1” (replace all, using sed or a text editor) yields a properly formatted file that has integers for booleans, overcoming the first barrier.
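That replace-all is a one-line job with sed. The sketch below runs it against a tiny stand-in CSV; anchoring the pattern on the end of the line avoids touching a “,yes” that happens to sit inside some other field, though a real export should still be eyeballed first:

```shell
#!/bin/sh
# The boolean fix as a sed one-liner, against a stand-in users file.
# Anchoring on end-of-line avoids touching ",yes" inside other fields.
set -e
printf 'username,autosubscribe\njsmith,yes\nadoe,no\n' > /tmp/users-demo.csv
sed -i 's/,yes$/,1/; s/,no$/,0/' /tmp/users-demo.csv
cat /tmp/users-demo.csv
```

(sed -i with no suffix is the GNU form; BSD sed wants -i ''.)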

The next issue is field length:

Debug info: Data too long for column 'address' at row 1
INSERT INTO vle_user (username,firstname,lastname,email,password,institution,idnumber,phone1,phone2,address,autosubscribe,mnethostid,city,country,lang,timezone,mailformat,maildisplay,maildigest,htmleditor,department,url,description,descriptionformat,auth,confirmed,timemodified,timecreated,suspended) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
[array ( [redacted]
)]
Error code: dmlwriteexception

Stack trace:

  • line 426 of /lib/dml/moodle_database.php: dml_write_exception thrown
  • line 1089 of /lib/dml/mysqli_native_moodle_database.php: call to moodle_database->query_end()
  • line 1131 of /lib/dml/mysqli_native_moodle_database.php: call to mysqli_native_moodle_database->insert_record_raw()
  • line 793 of /admin/tool/uploaduser/index.php: call to mysqli_native_moodle_database->insert_record()

I overcame this by truncating the field, repeating for a few other users (almost 10% of all users in my test case, relatively speaking), which then exposed an issue with long phone numbers (the data needs to be constrained further):

Debug info: Data too long for column 'phone1' at row 1
INSERT INTO vle_user (username,firstname,lastname,email,password,institution,idnumber,phone1,phone2,address,autosubscribe,mnethostid,city,country,lang,timezone,mailformat,maildisplay,maildigest,htmleditor,department,url,description,descriptionformat,auth,confirmed,timemodified,timecreated,suspended) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
[array ( [redacted]
)]
Error code: dmlwriteexception

Stack trace:

  • line 426 of /lib/dml/moodle_database.php: dml_write_exception thrown
  • line 1089 of /lib/dml/mysqli_native_moodle_database.php: call to moodle_database->query_end()
  • line 1131 of /lib/dml/mysqli_native_moodle_database.php: call to mysqli_native_moodle_database->insert_record_raw()
  • line 793 of /admin/tool/uploaduser/index.php: call to mysqli_native_moodle_database->insert_record()

A few of the phone number fields also contained alphabetic characters, not just spurious characters that push the length over the limit.

The address (and other) field length constraints are noted here.

To summarise, three issues needed to be addressed:

  • Boolean fields should be encoded in binary (0 or 1), not in words
  • Phone numbers should be made compact, with no irregular elements in them
  • The address length must be limited, or Moodle’s database schema altered to accommodate long addresses
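The phone-number cleanup in the second bullet can be scripted with awk before upload. In the sketch below the stand-in file has phone1 as the second column; a real export needs its column positions checked first, and this naive comma split breaks on quoted fields that contain commas:

```shell
#!/bin/sh
# Sketch: strip everything but digits from a phone column with awk.
# phone1 is assumed to be column 2 here; real exports will differ.
set -e
printf 'jsmith,0161 275-1234 ext. 7\nadoe,+44 (0)20 7946 0000\n' > /tmp/phones-demo.csv
awk -F, 'BEGIN{OFS=","} {gsub(/[^0-9]/, "", $2); print}' /tmp/phones-demo.csv > /tmp/phones-clean.csv
cat /tmp/phones-clean.csv
```

The same gsub pattern, repeated for each boolean or length-limited column, turns the whole pre-import cleanup into one pass over the file.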

Here is a visual illustration of the steps involved.

The following image shows the page that facilitates user import.

upload1

The data file can be either dragged in or added using the following file selector.

upload1-selector

A preview of specified length is then displayed, just before the data import starts.

upload2

Further options are made available to better control the import process.

upload2-settings

Errors may be encountered, in which case debugging mode needs to be enabled (making the output verbose).

debug

If no errors are encountered, users will be added and a summary shown at the end highlighting milder alerts (such as weak passwords).

upload3

Congratulations. The users are now imported and are searchable, modifiable, and manageable too.

users

It is worth noting that City/town and Country can be specified per user rather than mass-handled, defaulting to whatever was defined at installation time.

Importing Courses

Course creation should be possible using a file with comma-separated values and a plug-in which will be part of Moodle core in version 2.6 (or later). Note that the import feature is not yet included in the stable versions of Moodle, and thus it needs to be added manually, unless one hops onto a (nightly) test build of Moodle, probably version 2.6.

As noted here, “this is now in Moodle core as of 2.6. Moodle Admin Tools plugin for basic upload of course outlines, and applying templates using Moodle course backup files”.

There are 315 installed plug-ins in the test environment, 60 of which are disabled. The plug-in to install is downloadable from GitHub and comes with a README.txt summarising the steps to take. Here is the process, summarised visually.

Place the uncompressed archive in the standard destination directory for plug-ins.

plugin0

Now, log out of Moodle and log in again, as “admin”, in order for the upgrade to be invoked.

plugin1

Continue as instructed.

plugin2

Ensure all the code is up to date and thus compatible.

plugin3

Permissions are very important here. Ensure that files get 644 and sub-directories 755, assigned with chmod; otherwise there will be internal server errors (code 500). If all works correctly, a new option will appear.
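That permission fix can be applied recursively with find, as sketched below on a stand-in plug-in directory (the real path would be wherever the plug-in was unpacked, e.g. admin/tool/uploadcourse):

```shell
#!/bin/sh
# Sketch: 644 for files, 755 for directories, applied recursively.
# The directory tree is a stand-in for the unpacked plug-in.
set -e
rm -rf /tmp/perm-demo && mkdir -p /tmp/perm-demo/uploadcourse/lang/en
touch /tmp/perm-demo/uploadcourse/version.php /tmp/perm-demo/uploadcourse/lang/en/strings.php
find /tmp/perm-demo/uploadcourse -type f -exec chmod 644 {} +
find /tmp/perm-demo/uploadcourse -type d -exec chmod 755 {} +
stat -c '%a %n' /tmp/perm-demo/uploadcourse/version.php
```

Using find with -type keeps files and directories distinct, which a single chmod -R cannot do.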

plugin4

This can be accessed if the permissions on the files are correct.

plugin5

The upload form is similar to the one for “Users”.

plugin6

This part may be tricky: I was getting the following common error with the existing data exported from the old system: “There is something wrong with the format of the CSV file – please check the number of headings and columns match, and that the delimiter and file encoding are correct (don’t use comma-quoted as Moodle does not support it): Invalid CSV file format – number of columns is not constant!”

I tried many variations, reducing the data file to as few as two lines (several different lines with headers) to see if this would help overcome the issue, but it always came back with the same error message.

Oddly enough, even the examples on the official documentation page do not work.

For instance:

fullname,shortname,category,sortorder,idnumber,summary,format,showgrades,newsitems,teacher,teachers,student,students,startdate,numsections,maxbytes,visible,groupmode,timecreated,timemodified,password,enrolperiod,groupmodeforce,metacourse,lang,theme,cost,showreports,guest,enrollable,enrolstartdate,enrolenddate,notifystudents,expirynotify,expirythreshold,teacher1_role,teacher1_account
Greatest Course,GC101,Education Portfolios,1,,University Portfolio,topics,0,0,Owner,Owners,Visitor,Visitors,1/14/2008,10,15728640,0,0,1/12/2008,1/12/2008,portfolio,0,0,0,,,,0,2,1,1/14/2008,5/10/2008,0,0,10,editingteacher,lastname1.firstname@email.edu

do not work; basically, this is what the plug-in returns:

“sortorder” is not a valid field name

A similar error message is returned even when the original test file is watered down to just a few lines. This might be an issue with the importer, which is not yet included in the core program (not reliable enough for release). Without being able to automate course import, it will be hard to automate linking courses to users.

Upon further examination, I can import some data by removing the field “sortorder” and dropping many of the fields after “summary” (some of them are not supported and trigger errors during parsing).
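Stripping the offending columns can be scripted rather than done by hand in a spreadsheet. A minimal sketch follows; the whitelist of fields is illustrative (just the ones that imported cleanly in my tests), not an authoritative list:

```python
import csv

# Fields that imported cleanly in my tests; extend as needed.
KEEP = ["fullname", "shortname", "category", "idnumber", "summary"]

def filter_fields(src, dst, keep=KEEP):
    """Copy a course CSV from src to dst, retaining only whitelisted columns."""
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        fieldnames = [f for f in keep if f in reader.fieldnames]
        writer = csv.DictWriter(fout, fieldnames=fieldnames)
        writer.writeheader()
        for row in reader:
            writer.writerow({k: row[k] for k in fieldnames})
```

This keeps the original export intact, so one can re-run with a wider whitelist as more fields turn out to be supported.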

Some more debugging and changes to the structure of the data would enable more of the old system's data to be preserved and imported. Here is how the GUI is used to fill in missing information at import time:

plugin7

And further down there is more.

plugin8

In summary, it seems the fields need changing before import can be achieved with maximal reuse/preservation of data. This needs some more trial and error. The plug-in is not well documented and is a work in progress, as explained in the Moodle tracker (see comments).

To test import with a maximally simplified example, consider the following input file:

fullname,shortname,category,idnumber,summary
Competitive Athlete,CA,"Miscellaneous",1,"Test"

The corresponding import success notification would look as follows.

course1

In Courses, the field names that are properly coded and technically supported are not necessarily as documented (some of the documentation is well out of date). “teachers”, for example, is not a valid field name, and “sortorder” causes issues too. The data format is not ‘Moodle-ready’ unless all the fields are understood and the data can thus be parsed, so many modifications may be necessary for Moodle to absorb some data. There are more undocumented fields which are not supported, “students” for example. One can omit such fields manually (or using spreadsheet software) for the purpose of testing imports, until the code stabilises and the documentation is brought into alignment.

“category” is an important field name, as it specifies which category the course is clustered under. It needs to be a string, not a number, e.g. “Miscellaneous” (the default category in Moodle). Once the data about courses can be imported for testing purposes, the next step may follow. I shall be documenting the progress of importing content, which I explored while looking for the best solution for mass import of raw HTML data (which lacks some metadata/information relating to how it is linked to a course). This needs to be done semi-manually with mass conversions, although it depends on how the data being imported into Moodle was constructed/encoded. Some move away from Blackboard and some from privately-crafted systems that act as a VLE.
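Because “category” must name an existing category rather than carry a number, a pre-pass over exported data can substitute a named fallback. The helper below is hypothetical, not part of any plug-in; “Miscellaneous” is used as the fallback because it is Moodle's default category:

```python
import csv

def normalise_category(src, dst, fallback="Miscellaneous"):
    """Copy a course CSV, replacing numeric 'category' values
    with a named category that Moodle can resolve."""
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row.get("category", "").strip().isdigit():
                row["category"] = fallback
            writer.writerow(row)
```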

Importing Course Content

In order to import static HTML content as pages into courses (a catch-all case because all pages are reducible to that) we need to hack around the existing framework or find a plug-in. In Moodle 2.5.x, three main components exist for retaining course information and resources that are textual. These are:

  • Forum posts
  • Blogs
  • Messages

The above can be typed in using easy-to-use WYSIWYG interfaces, and everything is date/time-stamped. Resources and activities can also be imported, as described in http://blogs.sussex.ac.uk/elearningteam/2013/01/15/improving-moodle-import-part-3-the-application/ (one of several such posts from Sussex that facilitate importing).

Mass-enrolling and privilege/role setting

It is necessary to first add “participants” to the course, based upon some import process. Go to the Participants menu.

course2.png

Edit the enrolled users.

course3.png

Click on “Enrol users”.

course4.png

Self-enrol the system administrator. Then, elevate privileges.

course5.png

Adding content

While it is possible to register external blogs, as described in
http://docs.moodle.org/25/en/Using_Blogs#External_blogs , it would be preferable to add HTML files or mass-import them.

Other institutions have sought to achieve this, and there are threads about it at https://moodle.org/mod/forum/discuss.php?d=35581 and http://docs.moodle.org/23/en/Import_course_data

The problem is that the import goes from one part of Moodle to another, as shown below.

course6.png

There is a video demonstration of this whole process at http://www.youtube.com/watch?v=xQStUOfDe5w

The course can be configured to have peripheral files, which may also be HTML files. There are many options there as shown below.

course7.png

Here is the part where uploads are facilitated.

course8.png

Uploaded data needs to be referenced by the VLE, and I cannot find plug-ins that enable mass upload of enclosures while also notifying Moodle through the database (referencing and dereferencing). Summary files cannot be HTML-formatted, so an alternative route is required for import. Questions, for example, have their own versatile importer, which looks as follows.

course9.png

Automating Enrolment

Apart from the GUI that allows selecting multiple users (see Mass-enrolling and privilege/role setting) and performing actions on all of them (e.g. assign/enrol/give role), we could use SQL queries (in MySQL front ends or at the command line) to find and apply changes outside the framework. This is more safely done with plug-ins, which we need to find (it is hard to find any). If we import the users with some additional custom fields, then they can be filtered based on which course they are enrolled in and then modified accordingly in the Moodle-specific fields.
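For illustration only, a direct SQL route might start with a query like the one built below. The table and column names (mdl_user, mdl_enrol, mdl_user_enrolments, mdl_course) are assumed from the Moodle 2.x schema and must be verified against your own database; writing to a live database this way bypasses Moodle's API entirely and is risky:

```python
def enrolment_query(course_shortname, prefix="mdl_"):
    """Build a parameterised SELECT listing users enrolled in a course.

    Table/column names are assumed from the Moodle 2.x schema;
    verify them against your own installation before running anything.
    Returns (sql, params) for use with a MySQL client library.
    """
    sql = (
        "SELECT u.id, u.username "
        f"FROM {prefix}user u "
        f"JOIN {prefix}user_enrolments ue ON ue.userid = u.id "
        f"JOIN {prefix}enrol e ON e.id = ue.enrolid "
        f"JOIN {prefix}course c ON c.id = e.courseid "
        "WHERE c.shortname = %s"
    )
    return sql, (course_shortname,)
```

Reading is comparatively safe; any UPDATE or INSERT along these lines should be left to a plug-in that goes through Moodle's own enrolment API.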

A Year Without Facebook

Xmas

My wife used to use Facebook quite a lot. It’s a site that, in practice, probably violates people’s privacy more than all other sites combined, at least if one considers loss of privacy to peers (as opposed to government spies and marketers, who also get data from Microsoft, Google, and countless other companies). Just over a year ago I suggested to my wife that she ought not to upload photos to Facebook (with some applications Facebook just uploads all taken pictures automatically) and that I could set up an album that would preserve some of her privacy (no tags, no face recognition, no covert tracking of viewers, etc.). Days ago I finished uploading the last photos of the year, covering most recently:

  • Christmas Party
  • Midland Hotel Suites
  • Bradford City, Hotel, Clothes
  • Day Out and Birthday at Cora’s Restaurant
  • Christmas in the House

Yesterday, the number of direct hits on photos (meaning viewing a photo zoomed in) exceeded 100,000, demonstrating, in my opinion, that one does not need Facebook to manage one’s photos. As Facebook is, to many people, primarily a photo album with comments (it’s useless as a medium for news and other purposes), why would anyone really need it? Self-hosting requires some work and money, but there is a price to one’s privacy too. When you are merely Facebook’s product, not its customer, it is clear why Facebook gives ‘free’ hosting. Here is some guidance on how to set up a similar photo album.

WordPress for Galleries

A new site I’ve launched, Maria Chain, uses blogging software, WordPress, to act as a sort of photo gallery. This is the first time I have set up such a Web site, presenting an artistic portfolio using WordPress.
