The implementation section provides details that are specific to the current implementation rather than generalised into a method (the method is still being actively pursued and is not yet finalised). The graphical user interface is also explained there.
The graphical user interface required an investment of time, but it will ease later operation and shield anyone interested in the tools from the underlying code. The code itself is nonetheless annotated and sensibly divided.
The following text from Candès et al. comes from a technical report (not peer reviewed; an informal 40-page manuscript on ongoing work, dated December 2009). It relates to a problem that degrades the quality of our current results through ICP failures. In one of the latest experiments, for example, the decomposition was visibly broken: after hours of processing, the graphs describing the model made the error easy to see. Plots and ROC curves were still produced, showing that the error, while noticeably problematic, was not fatal to performance. Any images that move out of line can dominate the signal and spoil the model, which otherwise can achieve a recognition rate above 90% (over the whole NIST database, assuming further tweaking). Technical Report No. 2009-13 addresses the practical application to faces, namely mitigating lighting imbalance by a low-rank approximation L.
On Robust PCA: PCA is arguably the most widely used statistical tool for data analysis and dimensionality reduction today. However, its brittleness with respect to grossly corrupted observations often puts its validity in jeopardy - a single grossly corrupted entry in M could render the estimated L arbitrarily far from the true L0. Unfortunately, gross errors are now ubiquitous in modern applications such as image processing, web data analysis, and bioinformatics, where some measurements may be arbitrarily corrupted (due to occlusions, malicious tampering, or sensor failures) or simply irrelevant to the low-dimensional structure we seek to identify.
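The remedy the report proposes, Principal Component Pursuit, decomposes M into a low-rank part L and a sparse corruption part S by minimising ||L||* + λ||S||1 subject to L + S = M. The sketch below is our own minimal reading of the standard inexact augmented-Lagrange-multiplier solver for this problem, not the report's reference code; the step-size heuristics are assumptions, though λ = 1/√max(m, n) is the choice advocated in the paper:

```python
import numpy as np

def shrink(X, tau):
    """Soft-threshold the entries of X toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value thresholding: soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(M, max_iter=200, tol=1e-7):
    """Principal Component Pursuit via inexact ALM (a hedged sketch):
    split M into a low-rank L and a sparse S with L + S = M."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))       # lambda recommended in the report
    mu = 1.25 / np.linalg.norm(M, 2)     # heuristic initial penalty (assumption)
    mu_bar = mu * 1e7                    # cap on the growing penalty
    rho = 1.5                            # penalty growth factor
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                 # Lagrange multiplier estimate
    norm_M = np.linalg.norm(M)
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(residual) / norm_M < tol:
            break
    return L, S
```

On a synthetic low-rank matrix with a few grossly corrupted entries, this decomposition recovers the low-rank part essentially exactly, which is precisely the behaviour we would need to keep misaligned images from spoiling the model.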
Addressing 2-D face recognition: It is well known that images of a convex, Lambertian surface under varying illuminations span a low-dimensional subspace. This fact has been a main reason why low-dimensional models are mostly effective for imagery data. In particular, images of a human's face can be well-approximated by a low-dimensional subspace. Being able to correctly retrieve this subspace is crucial in many applications such as face recognition and alignment. However, realistic face images often suffer from self-shadowing, specularities, or saturations in brightness, which make this a difficult task and subsequently compromise the recognition performance.
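To make the low-dimensional subspace idea concrete: if vectorised images are stacked as the columns of a matrix (an assumed layout for illustration, not the report's code), then keeping only the top k singular directions yields, by the Eckart-Young theorem, the best rank-k approximation in the least-squares sense:

```python
import numpy as np

def low_rank_approx(M, k):
    """Best rank-k approximation of M (Eckart-Young) via truncated SVD.

    M is assumed to hold one vectorised image per column, so the leading
    singular directions span the dominant illumination subspace.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]
```

This plain truncation is exactly what makes PCA brittle: shadows, specularities, and saturated pixels enter the least-squares fit with full weight, which is the gap the robust formulation is meant to close.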
The report addresses 2-D imagery, but the work should generalise to 3-D; the search for literature on that subject continues. If this has not been done before, it may be worth exploring, in particular by demonstrating an advantage over the simpler approach it derives from (plain SVD), which we have already implemented and can refine further to a satisfactory level.