Go to the first, previous, next, last section, table of contents.
For over 99% of files, fragmentation is no worse than in an uncompressed filesystem.(3) As the file is being written, we compress whenever we reach the end of a cluster. When we start allocating blocks for the next cluster, we try to allocate them right next to the blocks of the previous, compressed, cluster. (Don't worry, the ext2 block allocation strategy does the right thing concerning holes.)
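The idea can be sketched like this (the `alloc_block_near' helper below is a made-up stand-in for ext2's goal-block allocation, not the actual kernel interface; the real patch does this inside the kernel's block allocation path):

     #include <stdint.h>
     #include <stdio.h>

     typedef uint32_t blk_t;

     /* Toy stand-in for the ext2 goal-block allocator: pretend the
      * requested goal block is always free.  The real allocator
      * falls back to a nearby free block when it is not. */
     static blk_t alloc_block_near(blk_t goal)
     {
         return goal;
     }

     /* Start the next cluster right after the last block of the
      * previous, already-compressed cluster, so that consecutive
      * clusters pack tightly on disk. */
     static blk_t alloc_next_cluster_start(blk_t prev_cluster_last_blk)
     {
         return alloc_block_near(prev_cluster_last_blk + 1);
     }

     int main(void)
     {
         blk_t last = 1041;  /* last block of the previous cluster */
         printf("next cluster starts at block %u\n",
                (unsigned)alloc_next_cluster_start(last));
         return 0;
     }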
It is only when you write over a cluster other than the last in the file that compression can cause extra fragmentation. This is because the new data might compress to a different number of blocks than previously, so you either get a gap in allocation (if the new data takes up fewer blocks) or you get a block that has to be written out of sequence (if the new data takes up more blocks than previously allocated to the cluster). The only example of this sort of file that I can think of is large (more than one cluster) database files. (`updatedb' doesn't count, because (I believe) it gets truncated to zero length before being written over.)
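The trade-off is just arithmetic on block counts; here is a toy illustration of the two cases (the block counts are made up):

     #include <stdio.h>

     /* When an interior cluster is rewritten, the new data may
      * compress to a different number of blocks than the old data:
      *   new_blks < old_blks  -> a gap is left in the allocation;
      *   new_blks > old_blks  -> the extra blocks must be written
      *                           out of sequence. */
     static void rewrite_cluster(unsigned old_blks, unsigned new_blks)
     {
         if (new_blks < old_blks)
             printf("gap of %u block(s) left in the allocation\n",
                    old_blks - new_blks);
         else if (new_blks > old_blks)
             printf("%u block(s) written out of sequence\n",
                    new_blks - old_blks);
         else
             printf("cluster fits exactly; no new fragmentation\n");
     }

     int main(void)
     {
         rewrite_cluster(5, 3);  /* new data compresses better */
         rewrite_cluster(3, 5);  /* new data compresses worse */
         return 0;
     }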
As someone else (email@example.com) pointed out, compression can actually reduce fragmentation in some cases, simply because we don't fill up the disk as quickly.
Nevertheless, people who are short on disk space (as many e2compr users are) tend to have a high turnover of files (deleting files to make way for new files, which have to be written in the cracks occupied by the files just deleted), which causes high fragmentation. See section Can I still use a defragmenter?, for comments on using a defragmenter.
Go to the first, previous, next, last section, table of contents.