The framework I work with mostly covers setting up video streams, drawing on the canvas that contains them, and invoking commonly used computer-vision filters. These are incorporated into the program as examples and include posterise, Canny edge detection, Sobel edge detection, histograms, and so on. In practice, any amount of filtering slows the process considerably and thus becomes detrimental to the whole pipeline (more on that later), so after much shallow experimentation I ended up working with the raw frames as they arrive. Edge detection can still be used to shrink the image by annulling pixels associated with the sky, based on horizon detection (the horizon has the expected edge-based characteristics). The implemented features were exploratory at first and more practical later, as getting familiar with the classes and methods takes experience and practice. Compiling and installing the modified programs on a device also takes a long time; development cycles are therefore slowed by the fact that programming is done away from the target device, with the result then transferred over a USB 2.0 interface.
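To illustrate why per-frame filtering is costly, here is a minimal, hypothetical sketch of Sobel edge detection in plain Python (not the framework's actual filter code): every output pixel reads a 3x3 neighbourhood, so even this simple filter does roughly nine multiply-adds per pixel per kernel, which adds up quickly at video rates.

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude for a 2D grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            # Accumulate the 3x3 neighbourhood against both kernels.
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    p = img[y + dy][x + dx]
                    gx += kx[dy + 1][dx + 1] * p
                    gy += ky[dy + 1][dx + 1] * p
            out[y][x] = abs(gx) + abs(gy)       # cheap magnitude approximation
    return out

# A synthetic frame with a vertical step edge: left half dark, right half bright.
frame = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
edges = sobel_magnitude(frame)
```

On this frame the response is strong only at the dark-to-bright boundary (columns 2 and 3) and zero in the flat regions, which is the property horizon detection exploits: the sky/ground boundary shows up as a band of high edge response.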