Some time ago I asked about limiting scp, and it turned out that I had a version of scp that did not support the -l option. I have since resolved that issue, but it turns out that the CPU is crippled just as badly even when -l is enforced, which means that I cannot work properly (unless I am away from the machine).
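For the record, I invoke it along these lines (the file name, host and the 8192 figure are only placeholders; -l takes the limit in Kbit/s):

    # cap the transfer at roughly 1 MB/s -- 8192 Kbit/s is just an example value
    scp -l 8192 somefile.iso user@remotehost:/some/dir/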
Either way, I have yet another issue, which is similar in nature. Almost daily I copy (cp) large files from one partition to another. If I use the Konqueror interface (copy & paste), the transfer proceeds at a pace that keeps interaction quite graceful while it runs. However, I have scripted the cp command, and cp is very CPU-greedy (like anything else that relies on round-robin scheduling in *nix); even 'nice' does not make the environment workable. If I leave an application idle for even a second, cp 'steals' all the resources it can, and the machine comes almost to a halt, in particular at the I/O level (applications themselves run fine), for several minutes. I have looked at the man page for cp, but could not immediately find a remedy.
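For what it is worth, the script boils down to little more than this (the paths are placeholders):

    #!/bin/sh
    # copy a large file between partitions at the lowest CPU priority
    nice -n 19 cp /mnt/source/largefile.img /mnt/target/

Even at niceness 19, the rest of the system crawls while the copy runs.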
So, my question is: can cp have limits imposed on it? And will such limits, as in the case of scp (see the first paragraph), actually affect the load? It certainly seems as if KDE, sitting at a higher level and using cp merely as a service, manages to achieve that.
Many thanks in advance,
Roy