Archive for April, 2012

TechBytes Episode 67: Nokia Back to Linux


Direct download as Ogg (0:50:56, 10.9 MB) | High-quality MP3 (17.7 MB) | Low-quality MP3 (5.8 MB)

Summary: The first episode in a long time discusses changes in form factors and the stronger points of Linux.

Today we spoke about Nokia, Linux, proprietary software on GNU/Linux, and finally something about LibreOffice. We didn’t prepare topics in advance, but it should be an entertaining episode nonetheless. The show closes with “Ceiling Of Plankton” (a song by Givers).

We hope you will join us for future shows and consider subscribing to the show via the RSS feed. You can also visit our archives for past shows. If you have an Identi.ca account, consider subscribing to TechBytes in order to keep up to date.

As embedded (HTML5): [audio player]

Roy and Amy

Me and Amy, my goddaughter.

Sent to me yesterday by her dear mother.

BT Downtime for a Day, a Week, Maybe More


When BT goes down, it goes down big time. A year ago BT erroneously disconnected my line, which took only 3 weeks to restore [1, 2, 3]. Great job! This was rectified after I had contacted BT’s head of support and got someone to come around, fix it, and then issue compensation. But even since then it has been a bumpy ride. Previously, the service was down for a whole day, making any uptime statistics quite embarrassing for this giant telecoms company (customers were given misinformation regarding the estimated time for restoration of service).

This time not only did broadband go down (for a whole part of the neighbourhood); the phone system went down too, affecting a large area as well (several families and family businesses). Promises that the connection would be restored by 5 PM (it went down at 8:15 AM) were more harmful than useful, because they delayed people from arranging contingencies. It has been a day and a half and I still have neither a phone line nor Internet, both of which are vital to my jobs (I work remotely from home).

Once again BT is very unhelpful, even on the phone. Ethnicity at the call centre is not a problem at all, but there is no personal touch, no appeal to authority, no way of actually making things happen; everything is very procedural. It’s like talking to a book. I spent about an hour in total talking to reps on the phone and also left contact details for updates, but I never heard anything back. It’s like talking to a wall while being discouraged from talking any further (they say there is no need for me to call, which I guess makes sense since I don’t even have a working phone, so I resort to using booths in the streets). It’s all very, very frustrating, with serious effects on jobs, personal life, and so on.

Having a contingency in such a case may help, but when the time of arrival/restoration of service is unknown, it sometimes makes sense to just go elsewhere for a while. Mobile broadband (a dongle) comes with a bandwidth cap and a 30-day usage window, so even if it is used during only one hour of downtime it can be extremely expensive (one dongle can cost 25 pounds, 10 of which is credit).

The bottom line is that BT downtimes are extremely costly, they can be very lengthy, and speaking to an actual person is a challenge. When the line too is down, one needs to rely on a mobile phone, in which case the 0800 numbers are no longer free. I use booths instead, as waiting in the queues can make calls very long (about half an hour each, making the cost prohibitive on mobile phones).

BT has been OK for several months (no downtime), but when it fails, the damage is quite serious. When the exchange goes down, as in this case, it doesn’t even matter which ISP one uses for Internet, because BT has a monopoly on the line. It can take days for BT to address the problem, but why hurry? When there’s a monopoly on the line, who is to compete on service quality?

Nothing is perfect and equipment sometimes gets damaged, but it’s unclear to me why it should take so long to fix, especially when a lot of people are affected. Telecommunications infrastructure these days is really vital to business and to personal lives; it’s not a mere luxury or a matter of entertainment.

The BT call centre says the problem will be addressed only by Friday, but why the delay? When pressed to say whether work was already being done on the exchange, they said yes, yet the building’s manager has not seen a single person from BT. It seems possible that BT reps say things just to get people off the line, making them optimistic in vain.

My phone line is down along with the Net. My neighbours have the exact same problem, and some of them don’t even use BT as their ISP. This needs to be addressed more quickly. Under Australian law, a company like BT would be ordered to issue compensation in a case like this as well.

Road rage in Brazil

Looking at Phones


Gladys Knight & The Pips – You And Me Against The World

SUNDAY brings an old favourite.

“‘You and Me Against the World’ is a song written by Kenny Ascher and Paul Williams, recorded by Helen Reddy for her 1974 album Love Song for Jeffrey.” –Wikipedia

From GMDS/FMM to Canny Edge Detection

With a more brute-force approach that takes into account a broader stochastic process, performance seems to have improved to the point where, for 50 pairs (100 images), there are just 2 mistakes using the FMM-based approach and 1 using the triangle-counting approach. This seems to have some real potential, even though it is slow for the time being (partly because 4 methods are being tested at the same time, including two GMDS-based approaches).

I then restarted the experiment with 4 times as many points, 84 images, and I’ll run it for longer. At the start there were no mistakes, but it’s slow. The purpose of this long experiment is to see whether the length of exploration and the use of numerous methods at once can yield a better comparator. In the following diagram, red indicates wrong classification. Since similarity is measured in 3 different ways, there is room for classification by majority, in which case only one mistake is made. It’s the 9th comparison of the true pairs, which is shown as well. The mouth, and arguably the whole lower part of the face, is a bit twisted; the FMM-based approach got it right, but the other two failed. Previously, when the process was faster, the results were actually better.
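Since each pair gets three similarity scores, the majority classification is simple to realise. A minimal MATLAB sketch, with hypothetical variable names (the real pipeline’s scores and thresholds are not shown here):

    % Hypothetical sketch: majority vote over three comparators.
    % scores: N-by-3 matrix of dissimilarities (FMM-based, triangle counting,
    % GMDS); thresholds: 1-by-3 vector of per-method acceptance thresholds.
    function same = majorityVote(scores, thresholds)
        votes = scores < repmat(thresholds, size(scores, 1), 1); % 1 = "same person"
        same  = sum(votes, 2) >= 2;  % accept when at least 2 of 3 methods agree
    end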

Scatter alterations were made to investigate the potential of yet more brute force. I reran the experiments as before but with different parameters that scatter the random sample closer to the centre of the face, and this eliminated the one mistake made before (the 9th true pair). The changes resulted in one single incorrect result: the 15th image in the other gallery. Whereas it was previously intuitive to find a fix for one mistake, when such a fix introduces a mistake elsewhere it’s time to think about changing the approach. One solution might be to increase the scatter sample/range, but it is already very slow as it is.
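For what it’s worth, pulling the random sample towards the centre of the face only requires shrinking the spread of the sampling distribution; a hedged sketch (the parameter names are made up for illustration):

    % Hypothetical sketch: scatter n sample points around the face centre.
    % centre: [cx cy] in image coordinates; spread: standard deviation of
    % the scatter (smaller spread = points concentrated nearer the centre).
    function pts = scatterSample(centre, spread, n)
        pts = repmat(centre, n, 1) + spread * randn(n, 2); % Gaussian scatter
    end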

Edge detection was then explored as another classifier facilitator.

In order to address the recurring issue where misclassifications are caused by improperly weighing detail against topology, another approach is going to be implemented and added to the stack of methods already in use. The approach will use edge detection near anatomically distinct features and then perform measurements based on the output. As the image below shows, GMDS is still sometimes inclined to accept false pairs as though they match, and this weakens the GMDS “best fit” approach.

I have implemented a 3-D classification method based on filters and the Canny edge detector, essentially measuring distances on the surface (distances between edges). So far, based on 20 or so comparisons, there are no errors. Ultimately, this can be used as one classifier among several.
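In outline, the method runs the edge detector on the (filtered) range image, lifts the edge pixels to 3-D using the depth values, and uses distances between them as a signature. A rough sketch under those assumptions (names are illustrative, and a real run would subsample the edge points; pdist is from the Statistics Toolbox):

    % Hypothetical sketch: pairwise distances between edge points of a range image.
    % Z: filtered range image (depth map); sigma: Gaussian width for Canny.
    function d = edgeDistanceSignature(Z, sigma)
        E = edge(Z, 'canny', [], sigma);                % binary edge map
        [r, c] = find(E);                               % edge pixel coordinates
        P = [c, r, double(Z(sub2ind(size(Z), r, c)))];  % lift edge pixels to 3-D
        d = pdist(P);                                   % pairwise Euclidean distances
    end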

The thing about Canny is that, if we do that, we might as well try using the set

Laplacian(I) - |g(I)| * div( g(I) / |g(I)| ) = 0

where g(I) = grad(I); this is the Haralick part of the “Canny” edge detector, i.e. the zero crossing of the second derivative taken along the gradient direction, without the hysteresis stage.
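A hedged numerical sketch of that zero-crossing condition (a direct discretisation for illustration, not the code actually in use):

    % Hypothetical sketch: edges as zero crossings of
    % Laplacian(I) - |g|*div(g/|g|), i.e. the second derivative of I taken
    % along the gradient direction (the Haralick criterion).
    function E = haralickEdges(I, sigma)
        G = fspecial('gaussian', 2*ceil(3*sigma) + 1, sigma);
        I = conv2(double(I), G, 'same');   % smooth before differentiating
        [Ix, Iy]   = gradient(I);
        [Ixx, Ixy] = gradient(Ix);
        [~,   Iyy] = gradient(Iy);
        den = Ix.^2 + Iy.^2 + eps;
        % Second derivative along the gradient direction, which equals
        % Laplacian(I) - |g|*div(g/|g|):
        Inn = (Ix.^2.*Ixx + 2*Ix.*Iy.*Ixy + Iy.^2.*Iyy) ./ den;
        % Zero crossings (sign changes) where the gradient is non-trivial:
        zc = (Inn .* circshift(Inn, [0 1]) < 0) | (Inn .* circshift(Inn, [1 0]) < 0);
        E  = zc & (den > 0.01 * max(den(:)));
    end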

I decided to look into changing it. Currently I use:

    % Excerpt adapted from MATLAB's edge.m ('canny' branch); a is the input
    % image, sigma the Gaussian width, and thresh the (optional) user-supplied
    % thresholds. smoothGradient, selectThresholds and thinAndThreshold are
    % local helper functions defined in edge.m.

    % Magic numbers
    PercentOfPixelsNotEdges = .7; % Used for selecting thresholds
    ThresholdRatio = .4;          % Low thresh is this fraction of the high.
    
    % Calculate gradients using a derivative of Gaussian filter 
    [dx, dy] = smoothGradient(a, sigma);
    
    % Calculate Magnitude of Gradient
    magGrad = hypot(dx, dy);
    
    % Normalize for threshold selection
    magmax = max(magGrad(:));
    if magmax > 0
        magGrad = magGrad / magmax;
    end
    
    % Determine Hysteresis Thresholds
    [lowThresh, highThresh] = selectThresholds(thresh, magGrad, PercentOfPixelsNotEdges, ThresholdRatio, mfilename);
    
    % Perform Non-Maximum Suppression Thinning and Hysteresis Thresholding of
    % Edge Strength
    e = false(size(a));           % initialise the output edge map before thinning
    e = thinAndThreshold(e, dx, dy, magGrad, lowThresh, highThresh);
    thresh = [lowThresh highThresh];
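Note that the two magic numbers only kick in when no thresholds are supplied; from the caller’s side they can be sidestepped by passing explicit [low high] values to edge, which spares editing the file (the values below are illustrative, not tuned):

    % Pass explicit hysteresis thresholds (fractions of the normalised
    % gradient magnitude, so both in (0,1)) instead of the defaults.
    BW = edge(Z, 'canny', [0.08 0.2], 2);   % thresh = [low high], sigma = 2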

There is a lot we can do with edges to complement the FMM-based classifiers (triangle counting, GMDS, others), but beyond that, I am thinking of placing markers on edges/corners (derived from range images) and then calculating geodesics between those. Right now it is all Euclidean, with no account for spatial properties like curves in the vicinity. Choosing many points and repeating the process slows everything down, but previous experiments show this to have potential. None of it is multi-scale just yet.
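To illustrate the gap between the two measures, one crude way to approximate an along-the-surface distance between two markers on a range image is to integrate 3-D step lengths along the image-plane segment joining them (a hedged sketch; a proper geodesic would come from fast marching on the surface):

    % Hypothetical sketch: Euclidean vs. along-the-surface distance between
    % two markers p and q (each [x y]) on a range image Z.
    function [dEuc, dSurf] = markerDistances(Z, p, q)
        n = max(abs(q - p)) + 1;               % one sample per pixel step
        x = linspace(p(1), q(1), n);
        y = linspace(p(2), q(2), n);
        z = interp2(double(Z), x, y);          % depth profile along the segment
        P = [x(:), y(:), z(:)];
        dEuc  = norm(P(end, :) - P(1, :));     % straight-line distance
        dSurf = sum(sqrt(sum(diff(P).^2, 2))); % summed 3-D step lengths
    end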

What we do with the edges is also risky, as the edges strongly depend on the pose estimation and on expression. In pose-corrected pairs (post-ICP) I measure distances between face edges and nose edges. Other parts are too blurry and don’t give a sharp enough edge that is also resistant to expression. The nose is also surrounded by a rigid surface (pose-agnostic for the most part).
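Restricting the measurements to those regions is just a matter of masking the edge map before taking distances (an illustrative fragment; noseMask and faceMask are assumed to come from the landmarking/pose step):

    % Hypothetical sketch: keep only edges inside pre-located regions.
    E     = edge(Z, 'canny', [], 2);   % edge map of the pose-corrected range image
    Enose = E & noseMask;              % edges in the nose region
    Eface = E & faceMask & ~noseMask;  % remaining face edges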

Nonetheless, problematic cases still exist and I am trying to find ways to get past them. One example is the first occurrence in the 25th pair, where edge detection is not consistent enough to make these distances unambiguous. In such cases, Euclidean measures, just like their geodesic counterparts, are likely to fail, incorrectly claiming the noses to belong to different people.

A modified edge-detection-based mechanism is now in place to serve as another classifier. It does fail as a classifier when edge detection itself fails, as shown in the image.
