Archive for the ‘Security’ Category

OpenSUSE Web Site Cracked

SUSE has not yet said anything publicly to acknowledge this, but it seems to have restored from backup or removed the defacement.

OpenSUSE Cracked

How to Patch Drupal Sites

My experience patching Drupal sites is years old and my general ‘policy’ (habit) is to not upgrade unless or until there is a severe security issue. It’s the same with WordPress, which I’ve been patching on several sites for over a decade. Issues like role escalation are not serious if you trust fellow users (authors) or if you are the sole user. In the case of some agencies that use Drupal, it might be safe to say that the risk introduced by changing code outweighs the safety benefit because, as far as one can tell, visitors to such sites do not even register for a username. All users are generally quite trusted and they work closely together (one would have to check the complete list to be absolutely sure). There is also a ‘paper trail’ of who does what, so if one was to exploit a bug internally, e.g. to do something s/he is not authorised to do, it would be recorded, which in itself acts as a deterrent.

If the security issue is trivial to fix with a trivial patch, then I typically apply it manually. When the SQL injection bug surfaced some months back, that’s what many people did for the most part. For larger releases (not bug fixes) the same applies, until there is no other alternative. What one needs to worry more about are module updates, especially those that are security updates. One should make a list of all modules used and keep track of news or new releases (watching general FOSS news alone usually means finding out when it’s too late). Thankfully, detailed information about the flaws becomes available, along with the associated risks, both for core and for additional/peripheral modules.
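For a trivial manual fix, the routine is little more than a dry run followed by the real thing. Below is a minimal sketch (Python wrapping the standard patch tool) of how one might do that against a copy of the docroot first; the docroot path and patch file name are hypothetical placeholders.

	#!/usr/bin/env python3
	"""Minimal sketch: dry-run a small security patch before applying it.
	The paths and patch file name are placeholders; adjust to your own layout."""
	import subprocess
	import sys

	DOCROOT = "/var/www/example-drupal-site"   # hypothetical site location
	PATCH = "core-security-fix.patch"          # hypothetical patch file

	# First check that the patch applies cleanly without touching any files.
	dry_run = subprocess.run(
	    ["patch", "--dry-run", "-p1", "-i", PATCH],
	    cwd=DOCROOT, capture_output=True, text=True,
	)
	if dry_run.returncode != 0:
	    sys.exit("Patch does not apply cleanly:\n" + dry_run.stdout + dry_run.stderr)

	# Only apply for real once the dry run succeeds.
	subprocess.run(["patch", "-p1", "-i", PATCH], cwd=DOCROOT, check=True)
	print("Patch applied; now re-test the critical pages.")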

Then there’s testing, which I guess one needs to do for any changes that are made, assuming time permits this. The last major Drupal flaw had a 7-hour window between publication and exploitation in vast numbers (maybe millions of attempts). It means one cannot always follow the formal testing procedure, although testing in an ad hoc way, or minimising the risk by applying a small patch, ought to work well. This leads me to suggest that developers don’t need one uniform workflow/process for changing Drupal but a multi-faceted one. Proposal:

If the flaw is

1. severe
2. not back-end (i.e. not related to role management)

consider the complexity of the patch and test immediately on an existing copy of the site, then deploy on ‘live’.

If the patch is a core patch, no alternatives exist. If the patch is to be applied to a module, study the effect of disabling the module (assuming it has no dependents) and consider temporarily keeping it out of reach on the public site(s).
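As a rough illustration of the ‘check for dependents first’ step, here is a small sketch that scans the .info files of a Drupal 7-style module directory for dependencies[] lines; the module name and paths are placeholders, not taken from any real site.

	#!/usr/bin/env python3
	"""Rough sketch: before disabling a contributed module, list which other
	modules declare it as a dependency in their .info files (Drupal 7-style
	layout assumed; module name and paths are placeholders)."""
	import re
	from pathlib import Path

	MODULES_DIR = Path("/var/www/example-drupal-site/sites/all/modules")
	TARGET = "views"  # hypothetical module one is thinking of disabling

	dependents = []
	for info_file in MODULES_DIR.rglob("*.info"):
	    text = info_file.read_text(errors="ignore")
	    # Lines look like: dependencies[] = views (optionally with version constraints)
	    for dep in re.findall(r"^dependencies\[\]\s*=\s*(\S+)", text, re.MULTILINE):
	        if dep == TARGET:
	            dependents.append(info_file.stem)

	if dependents:
	    print(f"Cannot safely disable '{TARGET}'; required by: {sorted(set(dependents))}")
	else:
	    print(f"No .info file declares '{TARGET}' as a dependency; disabling may be safe.")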

For less severe flaws:

1) merge into git on a dedicated branch
2) test on a local Vagrant installation
3) schedule for deployment to “development” for testing
4) schedule for deployment to “staging”
5) run regressions (one needs to define these)
6) client to carry out any required acceptance testing
7) schedule for deployment to production

Suffice to say, the changes should not only be made through git (never directly on the server); a database dump (or snapshot) should also be taken, both for quick fixes and for longer testing purposes, because even if the changes are revoked (git rollback), the database can be left in a sub-par/inadequate state.
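A minimal sketch of that pre-change dump, assuming a MySQL backend and credentials kept in ~/.my.cnf; the database name, user, and backup path are placeholders.

	#!/usr/bin/env python3
	"""Sketch: take a timestamped database dump before touching code, so a git
	rollback can be paired with a database restore. Names are placeholders;
	credentials are assumed to live in ~/.my.cnf rather than on the command line."""
	import subprocess
	from datetime import datetime
	from pathlib import Path

	DB_NAME = "drupal_site"      # hypothetical database name
	DB_USER = "backup_user"      # hypothetical backup account
	BACKUP_DIR = Path("/var/backups/drupal")

	BACKUP_DIR.mkdir(parents=True, exist_ok=True)
	dump_file = BACKUP_DIR / f"{DB_NAME}-{datetime.now():%Y%m%d-%H%M%S}.sql"

	with open(dump_file, "w") as out:
	    subprocess.run(
	        ["mysqldump", "--single-transaction", "-u", DB_USER, DB_NAME],
	        stdout=out, check=True,
	    )
	print(f"Dump written to {dump_file}; safe to apply the patch now.")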

Regressions of interest for Drupal are not just site-specific. There are some nice templates for these, and one needs to consider which modules are in use on the site. Intuition and general familiarity with the CMS loop/hooks help one predict what impact a change would have on modules, if any. Drupal has good documentation of functions (by name), so these too can be studied before changes are made. To avoid some modules ‘silently’ breaking, following any change to core (or even modules) one may need to go through a list of tests, specified in advance, that help verify no module spits out PHP errors or behaves oddly. It is common to test critical pages first, e.g. finding an authority, research reports, etc. Sometimes it should be possible to also automate the testing by basically making local snapshots of pages of interest and then diff‘ing them after changes are made, using sophisticated tools like Versionista or a manual side-by-side comparison by a human operator. There are browser extensions that further facilitate this, but caching layers such as Cloudflare, Varnish, etc. can impede this process (even though changes to the underlying code may invoke an override, at least for Varnish).
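Here is a small sketch of that snapshot-and-diff idea using only the Python standard library; the page names and URLs are placeholders for whatever the critical pages happen to be.

	#!/usr/bin/env python3
	"""Sketch: snapshot a few critical pages before a change, then diff them after.
	Run with the argument "before" prior to patching, and with "after" (or no
	argument) once the change is deployed. URLs are placeholders."""
	import difflib
	import sys
	import urllib.request
	from pathlib import Path

	PAGES = {
	    "front": "https://www.example.org/",
	    "find-authority": "https://www.example.org/find-an-authority",
	    "reports": "https://www.example.org/research-reports",
	}
	SNAP_DIR = Path("snapshots")

	def fetch(url):
	    with urllib.request.urlopen(url) as resp:
	        return resp.read().decode("utf-8", errors="replace")

	def snapshot():
	    SNAP_DIR.mkdir(exist_ok=True)
	    for name, url in PAGES.items():
	        (SNAP_DIR / f"{name}.html").write_text(fetch(url))

	def diff():
	    for name, url in PAGES.items():
	        before = (SNAP_DIR / f"{name}.html").read_text().splitlines()
	        after = fetch(url).splitlines()
	        changes = list(difflib.unified_diff(before, after, lineterm=""))
	        status = "no visible change" if not changes else f"{len(changes)} changed lines"
	        print(f"{name}: {status}")

	if __name__ == "__main__":
	    snapshot() if sys.argv[1:] == ["before"] else diff()

Caching in front of the site can make the “after” fetch stale, so it may be worth bypassing the cache (or purging it) before diffing.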

Regressions are nice, but in many cases developers don’t have time to run them and a simpler set of manual checks can help gain confidence that changes made have no detrimental effects.

I cannot recall ever having major issues patching (as opposed to upgrading) the core of WordPress and Drupal, and I have done this hundreds of times. The quality of testing when it comes to core (not external/additional modules) is quite high, but another worthy step, before making any changes, is to look around forums to see what experience other people have had. There were cases where patches were problematic and this quickly became public knowledge; sometimes workarounds, or patches for the patches, are circulated within hours.

Background reading

CCTV Not Effective

Surveillance camera

WITHOUT a doubt, there are circumstances where evidence extracted from CCTV is valuable. For instance, if there is a street/pub brawl, one can use footage to verify or falsify eyewitness accounts or the story told by those involved.

For the most part, however, CCTV fails to justify its great cost, not just monetary cost but also the cost to our civil liberties. Today I got a good reminder of that.

Having spent nearly an hour speaking to security personnel and the local police, I found that CCTV did, in fact, capture the theft of my hybrid bike (which retails at around £500) roughly two hours ago. It was captured because I only ever park and chain my bike to solid objects like designated bike rails, in front of cameras and in the presence of many people.

Not only did several cameras capture good footage of my bike being stolen but also the store manager (the store I was in for just 10 minutes) was at the parking lot witnessing the crime. Was that enough to prevent the crime? No. To capture the perpetrator? No. To return the stolen bike? No.

The perpetrator wore a hoodie, so it is hard to identify him (the footage only identifies him as a black man in his mid-twenties, to quote the security personnel who investigated it). It is too early to assume that the bike won’t be returned and the perpetrator caught, but the fact of the matter is, CCTV, as I have long argued (for many years), does not help prevention and rarely helps identification.

If the perpetrator is very naive, he or she may be caught early on, and in such cases it might work, but the hard cases cannot be resolved by CCTV. All that can be achieved is confirmation that a certain crime occurred, and in cases where an insurance agency is involved, it may help prevent insurance/benefit fraud. My bike was not insured. I don’t know anyone who buys bike insurance.

Surveillance tools which are run and owned by the state (or law-enforcement agencies), such as CCTV, are not there to protect, and arguably they do not serve as a deterrent either. They are probably not worth the investment. More people need to be on the ground, creating more jobs and adding to real security, not sci-fi pseudo-futuristic security theatre.

European Union/Commission Saves Us From Big Brother

Scanner

According to the news today (the theme one comes across by listening to any radio station in the UK), continental Europe comes to the rescue again. Here in Manchester, getting a cancer-causing scan is mandatory for boarding any plane at Manchester Airport. This is very profitable for some companies and their cronies, who devised these ludicrous measures because of one guy with explosives in his underwear (an old incident whose casualty count is 0). As I have been stressing for almost a year, those machines that scan people as though they were suitcases are assured to kill (in the long run) more people than they would save by preventing explosives from going on planes through one’s breast area, crotch, etc. The whole thing is a sham and a cancer-generating pipeline that makes some industrialists rich.

So anyway, the news here is that removal of all such machines has just been demanded by the authorities in the EU (probably Belgium and the surrounding aristocracy). This is good. No more will I need to confront airport staff over their stubbornness; why should they impose X-ray scans as a sort of blackmail prior to travel? What have we as a civilisation sunk to? And that’s not even delving into other issues such as the acquisition (with alleged retention) of naked pictures of every citizen who travels through an increasing number of airports. Civil liberties, not just our health, are being jeopardised without taking simple risk calculations into account.

Several months ago I did some maths related to this and came to the conclusion that unless those scanners can prevent 200 large planes from going down, by detecting a passenger with explosives that cannot be detected by other means, the deaths due to the cancer will be greater. In other words, by placing those machines in the airport (lethal X-rays rebranded) they sign the death sentence of many people and hardly save any lives.

On numerous occasions I have had discussions about this with staff who work around those machines, and never could they provide a compelling explanation for why they participate in it (big brother cooperation). Perhaps “I’ve got a mortgage to pay” is the best they can do. One persuasive approach is to clarify to these people that their health too is at great risk and that information about it is withheld. Hopefully those machines will all get canned, just like the ID cards. Liberty and security don’t sit well together.
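For what the structure of that calculation looks like, here is a back-of-the-envelope sketch; every figure in it is a placeholder chosen purely for illustration, not a measured or claimed value, and the conclusion swings entirely on the assumed per-scan risk.

	#!/usr/bin/env python3
	"""Back-of-the-envelope sketch of the comparison described above.
	All inputs are illustrative placeholders, not real data."""

	scans_per_year = 200_000_000        # placeholder: scans performed each year
	years_in_service = 10               # placeholder: deployment lifetime
	fatal_risk_per_scan = 1e-7          # placeholder: excess fatal cancer risk per scan
	passengers_per_large_plane = 300    # placeholder: capacity of a large plane

	expected_cancer_deaths = scans_per_year * years_in_service * fatal_risk_per_scan

	# The scanners do net harm unless they uniquely prevent (i.e. catch something
	# no other screening method would have caught) at least this many fully
	# loaded planes from being destroyed over their lifetime.
	break_even_planes = expected_cancer_deaths / passengers_per_large_plane

	print(f"Expected cancer deaths over {years_in_service} years: {expected_cancer_deaths:.0f}")
	print(f"Planes that must be uniquely saved to break even: {break_even_planes:.2f}")
	# Note: the answer is dominated by fatal_risk_per_scan, for which published
	# estimates vary by orders of magnitude.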

Improved Surfing and Browsing Notes

OVER time, I’ve accumulated a few notes about things that I bear in mind while surfing the Web or communicating with people. Here’s a quick breakdown.

Saving Evidence

Corporations, unlike us mere mortals, don’t care about preservation more than the law requires. That’s why the Bush Administration, Intel, Microsoft and many other companies purge E-mails and shred documents without any guilt or hesitation. I should really make copies of everything I cite (I rely too much on the Web Archive). A friend of mine wanted to automate this and create archives of all my posts, plus local copies of cited articles. With wget, these can be sorted nicely by URL, but I never do this. If you have any tips, please leave a comment.
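If one did want to automate it, a small sketch along these lines (mirroring each cited URL to a host/path layout on disk, much as wget would) might be a starting point; the URL list is a placeholder, since a real script would extract the links from the posts themselves.

	#!/usr/bin/env python3
	"""Sketch: keep local copies of cited articles, filed by their URL, so posts
	do not depend on the Web Archive staying up. URLs are placeholders."""
	import urllib.parse
	import urllib.request
	from pathlib import Path

	ARCHIVE_ROOT = Path("cited-articles")
	CITED_URLS = [
	    "https://www.example.org/news/some-cited-story.html",
	    "https://www.example.net/2007/another-reference/",
	]

	for url in CITED_URLS:
	    parts = urllib.parse.urlparse(url)
	    # Mirror the host/path layout on disk, much as wget would.
	    local_path = ARCHIVE_ROOT / parts.netloc / parts.path.lstrip("/")
	    if url.endswith("/") or local_path.name == "":
	        local_path = local_path / "index.html"
	    local_path.parent.mkdir(parents=True, exist_ok=True)
	    with urllib.request.urlopen(url) as resp:
	        local_path.write_bytes(resp.read())
	    print(f"Saved {url} -> {local_path}")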

WWW Privacy

A person can always use a proxy if there is fear of having IP addresses harvested. I set up this thing for myself a couple of years ago and I had to make it password protected because lots of people from Asia used up my bandwidth. Then, the only server that ever sees my IP address is mine. You’re essentially passing requests through a trusted middleman.
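As a minimal sketch of that ‘trusted middleman’ arrangement on the client side, one can point the standard Python URL machinery at a personal, password-protected proxy; the host, port and credentials below are placeholders.

	#!/usr/bin/env python3
	"""Sketch: route web requests through one's own (password-protected) proxy so
	that only that trusted server ever sees the client IP address. The proxy
	host, port and credentials are placeholders."""
	import urllib.request

	# Hypothetical personal proxy, protected with basic-auth credentials.
	proxy = urllib.request.ProxyHandler({
	    "http": "http://alice:s3cret@proxy.example.org:3128",
	    "https": "http://alice:s3cret@proxy.example.org:3128",
	})
	opener = urllib.request.build_opener(proxy)
	urllib.request.install_opener(opener)

	# From here on, every urlopen() call goes via the middleman.
	with urllib.request.urlopen("http://www.example.com/") as resp:
	    print(resp.status, len(resp.read()), "bytes fetched via the proxy")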

Since all routers are peers as well, there’s no way to get perfect independence from other people’s vigilant eyes. One option is therefore to use Web mail via proxies, but then you’re still relying on the proxy not giving away its log files (or destroying them every night).

I suppose you may have heard by now about the police demanding encryption keys from an animal rights activist over here in Britain. There’s increased mail monitoring as well, which is why I suggest that people get themselves covered. There’s also that Blogger (Google) incident from last month: Google gave the IP address to the police and used some excuse. Think of mini-microsoft (the blogger) and many others who rely on anonymity. Without anonymity, they would lose their jobs. It seems like the approaching end of an era. Authorities demand greater control and too often misuse their powers.

E-mail Privacy

Have you, readers, considered encrypting your E-mails? I do this with people who can. If you’re using Thunderbird, there’s an extension called Enigmail that will make it very simple for you to set up. It’s cross-platform too.
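For those not using Thunderbird, the same mechanism can be driven directly from GnuPG on the command line; below is a minimal sketch (wrapping gpg from Python), with a placeholder recipient whose public key is assumed to already be in the keyring.

	#!/usr/bin/env python3
	"""Sketch: encrypt a message to a correspondent's public key with GnuPG, the
	same mechanism Enigmail drives from inside Thunderbird. The recipient address
	is a placeholder and their key must already be in your keyring."""
	import subprocess

	RECIPIENT = "friend@example.org"   # hypothetical correspondent with a known public key
	MESSAGE = "Meeting notes attached; please keep these confidential.\n"

	result = subprocess.run(
	    ["gpg", "--encrypt", "--armor", "--recipient", RECIPIENT],
	    input=MESSAGE, capture_output=True, text=True, check=True,
	)
	print(result.stdout)   # ASCII-armoured ciphertext, ready to paste into an e-mail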

Remember that privacy is among your rights. Don’t let people take it away from you and essentially treat you like a criminal. With less privacy, you are left powerless and exposed.

Linux Cannot be Trusted, With the Exception of Freedom

WE have entered a period when GNU/Linux desktops are gradually becoming more widely accepted. An increasing number of people choose to migrate not only for cost savings, but also — because software takes more control of the user’s privileges over time — for freedom, which becomes attractive. To some, independence and choice are newly-realized traits and they are inherent in the software. In many cases and to many people, these traits were never understood or explored before, but they have a great deal of impact on behaviour and security. Thus, they are related to trust.

With changes in software paradigms — from closed source (proprietary) to open source — changes in mindset do not necessarily ensue. Ideological and conceptual views cannot be changed overnight. Experienced Linux users strive to find a point of balance wherein both worlds (and both mindsets) can settle and thrive together, without exclusion of peers.

It is often argued that openly sharing code leads to elegant solutions. Poor solutions perish whereas better ones evolve and spread. While many remain united by the goal of producing and supporting the best operating system and applications, there remains at least one divide: there are those who argue in favour of full transparency and those who are more apathetic towards it.

Apathy gives more control over technical decisions to parties other than the user him/herself. This leaves a door open to abuse of rights, which is usually motivated by financial interests.

Other divides involve the learning curve (e.g. command line versus GUI) and perceptions of intellectual property, but these divides rarely affect the development model and the quality of software. Different distributions of Linux address the needs of different users, yet there is at least one component that is shared by almost everyone — the kernel.

Computer code is hardened and bugs are removed when more pairs of eyes review its quality. It is a question of visibility. Visibility is trust. What happens, however, when partial visibility becomes a necessary evil? Increasingly, as the reach of Linux broadens, a desire is born to choose easier routes to working solutions. As the technology-savvy crowd becomes a minority among the userbase, principles are compromised.

Arguments about pragmatism arise whenever a company or an individual is unwilling to disclose secrets. If this company or individual is kind enough to meet halfway, by providing a solution which enables function but insisting that this function remains cryptic, a dilemma becomes inevitable. If this gift is accepted and becomes widely adopted, it becomes difficult to beg for change.

The importance of open source drivers is largely underestimated. Due to their proximity to the core of an operating system, they can affect security, privacy, and stability. An open source platform cannot be truly understood unless its subsystems are entirely visible.

A truly trustworthy system is one where there is an open route of visibility which extends downward to the lowest level. Such a system is needed to ensure that no single mind or faction is misusing its ability to embed self-serving and user-hostile code. Trust is as deep as the layer of the stack which defines separation between known and unknown — that which permits the user to access the core.

In the future, we are likely to see widespread use of free/open source BIOS, open specifications for graphics cards with an open source implementation, and processors that are open (consider Sun’s processors whose design is already licensed under the terms of the GNU GPL).

Because free Linux distributions take a lot of criticism, I’ve written an article. Free software is, sadly enough, largely misunderstood. Only days ago, Mark Pilgrim was ranting and Don Parris responded. My own 50 cents were posted in Datamation. The article could have been called “The Importance of Gobuntu to the Goals of Linux”, but I chose a different (and more generic) headline. Gobuntu was born to serve specific needs. It is built for users to whom freedom is an important quality of the software they use. More in Datamation:

As GNU/Linux becomes more popular, the motives behind its inception are often forgotten. Linux is a free operating system, but its broadening userbase perceives this freedom as pertaining to cost, not rights and liberty.

Windows Botnets Put the Internet at Risk

WE often hear about the need to rebuild the Internet or at least rethink and revise its whole design. The problem, however, is not the Internet’s design. The Internet was built under the assumption that nodes in the network are well behaved and those that are not can be pulled out of it.

What do you get when a single evil mastermind controls millions of these nodes? That’s where the poor security — a wet dream to governments that wanted back doors available in every PC — comes into play. Windows is on the brink of destroying the Web. Sadly, the mainstream media does not give this much coverage, for obvious reasons. The article cited here (via one blogger’s interpretation) talks about the Storm botnet.

“Storm” is nothing compared to the whole. Vint Cerf, one of the fathers and architects of the Internet, says there are 100-150 million Microsoft Windows zombies out there. That’s a large proportion of the PCs in the world and it’s a ticking time bomb. The criminals use only a fraction of the PCs’ capacity at the moment, but they do some test runs at times, e.g. almost knocking down DNS, i.e. ‘killing’ the Internet. One such attack came from Korea about a year ago.

There were also those botmasters who were doing some heavy spamming last Xmas (while system administrators were away). Mail servers were knocked offline and some bloggers had their accounts suspended. There is also the attack on Estonia, among many other incidents. The cyber-criminals are just afraid of getting caught, but they have enormous (and scary) potential. The only solution to botnets is probably to make Microsoft Windows obsolete. The operating system is, at present, broken beyond the point of being repairable. We will suffer the consequences of this for years to come, because old PCs will continue to be hijacked and secure software will not take over them.
