Roy Schestowitz wrote:
> __/ [Roy Culley] on Wednesday 08 February 2006 18:52 \__
>
> > I have XP Pro installed under vmware. Today vmware crashed. When I
> > tried to restart it, it failed because of a lockfile. I remove the
> > lockfile and then I get a popup saying it can't write to disk. I
> > go to edit config and select the disk. Here's what it showed:
> >
> > Current size: 8.5GB
> > Maximum size: 8.5GB
> > System free: 0
> >
> > So I click the Defragment button and then it shows:
> >
> > Current size: 8.5GB
> > Maximum size: 8.5GB
> > System free: 6.8GB
> >
> > 6.8GB freed up by defragmenting! MS SW is just mickey mouse crap. I've
> > admin'd *nix systems for over 15 years and never had a problem due to
> > filesystem fragmentation. Even heavily used read / write filesystems
> > have at most a few percent fragmentation after years of use.
>
> I remember defragmenting my hard-drive every night to free up some space. I
> had
> only 170 MB at the time and was running MS-DOS. Evidently, filesystems from
> Microsoft still suffer from deficiencies that have not been addressed.
>
> Recalling the number of hours I spent running defrag.exe is annoying, to
> say the least. Before I started using Linux, I thought it was a "necessary
> evil". I now know it was just "evil".
>
> The only maintenance I have done on this SuSE 8.1 box for the past two years
> is emptying ~/.thumbnails (about twice) and /tmp (only once, last year).
>
-> "I remember defragmenting my hard-drive every night to earn some
space...."
I must be missing something here, but how does defragging a hard-drive
create disk space? The amount of disk space files take up doesn't
change. The only thing that changes is the physical layout of the
files. Defragging makes the files contiguous... it doesn't make them
smaller.
Assume that a file is 3200 bytes long. With a block size of 512 bytes,
this file would take 6.25 blocks to store, which in reality means 7
blocks, since few filesystems use subsegment allocation.
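
Just to sanity-check that arithmetic, here is a small Python sketch; the
3200-byte file and 512-byte block size are simply the example numbers
above:

    import math

    BLOCK_SIZE = 512   # bytes per block, per the example above
    file_size = 3200   # bytes

    # Ceiling division: a partially filled block still occupies a whole block.
    blocks = math.ceil(file_size / BLOCK_SIZE)
    print(blocks)                            # 7
    print(blocks * BLOCK_SIZE - file_size)   # 384 bytes of slack in block 7
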
On a fragmented filesystem those 7 blocks could be scattered all over
the disk. Reading the file is slow because the (mechanical) drive head
needs to seek to several locations in order to read the entire
contents. Once defragmented, the file would still occupy 7 blocks,
except they would be contiguous. That is, they would be stored in
consecutive sectors.
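
To make the seek argument concrete, here is a toy Python sketch (the
block addresses are made up for illustration): every time the next block
is not adjacent to the previous one, the drive head has to move.

    def seeks_needed(blocks):
        """Count head movements: each non-consecutive block costs a seek."""
        return sum(1 for prev, curr in zip(blocks, blocks[1:])
                   if curr != prev + 1)

    fragmented = [10, 991, 42, 7013, 500, 2, 8888]   # 7 blocks scattered about
    contiguous = list(range(100, 107))               # same 7 blocks, in a row

    print(seeks_needed(fragmented))   # 6 -- a seek between nearly every block
    print(seeks_needed(contiguous))   # 0 -- one sequential read covers it
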
It is also worth noting that only files that are written to can become
fragmented. A defragmented file does not become fragmented by simply
being read. I typically use one drive for "data" and another drive for
the OS. Assuming that my "system files" are defragmented to begin with,
there really isn't any way for them to become fragmented over time.
(Unless one updates the OS, of course.)
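
As a toy illustration of why writes are what cause fragmentation, here
is a Python sketch assuming a naive allocator that simply hands out the
next free block (real filesystems try harder, but the effect is the
same once the disk fills up):

    # Toy allocator: hands out block numbers in order, the way a naive
    # filesystem might when it can't reserve space ahead of time.
    next_block = 0

    def allocate():
        global next_block
        block = next_block
        next_block += 1
        return block

    # Two files growing at the same time interleave their allocations...
    file_a, file_b = [], []
    for _ in range(4):
        file_a.append(allocate())
        file_b.append(allocate())

    print(file_a)   # [0, 2, 4, 6] -- fragmented: no two blocks adjacent
    print(file_b)   # [1, 3, 5, 7] -- likewise

    # ...whereas a file written in one go comes out contiguous.
    file_c = [allocate() for _ in range(4)]
    print(file_c)   # [8, 9, 10, 11] -- contiguous; reading it needs no seeks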