Does Linux need to be defrag'd? - gHacks Tech News

Does Linux need to be defrag'd?

I get this question a lot, and generally speaking the answer is a resounding "no". I have gone nearly twelve years using Linux without defragging a drive and I've never noticed a slowdown on a system. But just because you don't need to doesn't mean you can't. It is actually possible to check the fragmentation of a Linux mount point and to defragment that mount point.

In this article I will discuss this very issue and then I will show you how you can test the fragmentation of a mount point on your Linux drive and then defragment that mount point.

Why it isn't necessary

"Why?" is the question I get nearly every time I tell a user that it is not necessary to defrag a Linux drive. The first and foremost reason is that the majority of files on a system need super-user permission to move. Oh sure, you can move anything you want around in your ~/ directory. But try to move anything in /usr/bin, /opt, /sbin, or any other directory outside of ~/ (without super-user permission) and see how far you get. What this means is that during general, every-day usage the vast majority of files are not being moved around on your system. The only files you really need to concern yourself with are the ones in your home directory - and those files have little to nothing to do with the performance of your machine.

Another difference is that some other operating systems try their best to place files as close to the front of the drive as they can, without gaps. When files get moved around, gaps appear between them, making the drive slower to read. The Linux operating system does not do this. Instead, the system starts in the middle of the drive and places files with room around each one; it doesn't concern itself with packing files next to one another at the start of the drive. So when spaces are created it's not a big deal, because the system is designed around those spaces. The only time you will notice fragmentation on a Linux drive is when the drive is over 95% full. At that point the seeming "randomness" of placement will have caught up with you, and the spaces between files might not allow for the addition of more files.

So is it possible to defragment? Yes, it is.

Fragmentation check

I discovered a very handy Perl script that will allow you to check a mount point for fragmentation. The script can be found here in the discussion. Copy that code into a file called fragmentation.pl and give that file executable permissions with the command chmod u+x fragmentation.pl. Now issue the command:

sudo ./fragmentation.pl /home/USER

Where USER is the user whose home directory you want to check.

You will most likely be surprised at how low the number is. It will give you a report like:

1.30108895488615% non contiguous files, 1.01067741479282 average fragments.
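The arithmetic behind a report like that is straightforward: count how many files report more than one extent, and average the extent counts. Here is a minimal Python sketch of that tally (an illustration, not the Perl script itself), assuming per-file output in the "path: N extents found" format that tools like filefrag print; the sample paths are hypothetical:

```python
import re

def fragmentation_report(lines):
    """Compute % of non-contiguous files and average fragments
    from lines like '<path>: <N> extent(s) found'."""
    files = fragments = non_contiguous = 0
    for line in lines:
        m = re.search(r":\s*(\d+) extents? found", line)
        if not m:
            continue
        extents = int(m.group(1))
        files += 1
        fragments += extents
        if extents > 1:          # more than one extent = fragmented
            non_contiguous += 1
    return 100.0 * non_contiguous / files, fragments / files

# Hypothetical sample: one fragmented file out of four
sample = [
    "/home/user/a.txt: 1 extent found",
    "/home/user/b.iso: 4 extents found",
    "/home/user/c.jpg: 1 extent found",
    "/home/user/d.log: 1 extent found",
]
pct, avg = fragmentation_report(sample)
print(f"{pct}% non contiguous files, {avg} average fragments.")
# 25.0% non contiguous files, 1.75 average fragments.
```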


Now, say you do want to defragment that home directory. You can do so with this handy piece of code. Save that piece of code in a file called defrag.pl and give it executable permissions with the command chmod u+x defrag.pl. Now issue the command:

sudo ./defrag.pl /home/USER

Where USER is the user whose home directory you want to defragment.

Now, this task can take some time, depending upon the size of your ~/ and how much is in that directory. But once it is done, run the fragmentation check again and I bet you will find the results positive.
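The basic idea behind a defragmenter like this is simple: rewriting a file forces the filesystem to reallocate its blocks, which usually lands them in one contiguous run. Here is a rough Python sketch of that copy-and-replace approach (an illustration of the idea, not the actual defrag.pl):

```python
import os
import shutil
import tempfile

def rewrite_in_place(path):
    """Copy a file to a temporary name, then swap the copy back
    over the original. The fresh copy gives the filesystem a
    chance to allocate its blocks contiguously."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    os.close(fd)
    shutil.copy2(path, tmp)   # copies data plus permissions/timestamps
    os.replace(tmp, path)     # atomically replace the original

def defrag_tree(root):
    """Rewrite every regular file under root."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            rewrite_in_place(os.path.join(dirpath, name))
```

Because each file is copied before it replaces the original, the mount point needs free space at least equal to the largest file being rewritten, which is why running something like this on a nearly full drive is risky.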

Final thoughts

Although you will most likely never have to run this, it is nice to know that it is possible. A Linux system rarely gets fragmented to the point where you will notice the slightest slowdown, at least not until that drive is nearly full. And considering the cost-to-size ratio of today's drives, the possibility of them filling up is slim. And if they do, you'll probably just go out and buy another drive.



    1. chessboxing said on June 7, 2010 at 3:07 am

      Nice to have cleared that up.

    2. Tony said on June 7, 2010 at 3:36 am

      Nice visual explanation of how Linux avoids fragmenting here

    3. Mr. X said on June 7, 2010 at 4:42 am

      Course if you use XFS(which you should) just do a ‘sudo xfs_fsr -v’ … Boop…done.

    4. Me said on June 7, 2010 at 5:04 am

      Wow that brain fart of an article was chock full of fail and ignorance.

    5. SubgeniusD said on June 7, 2010 at 7:22 am


      Wow that brain fart of a comment was chock full of fail and ignorance.

    6. Saad Ahmed said on June 7, 2010 at 8:01 am

      Nice, it will definitely improve disk I/O performance.

    7. Dotan Cohen said on June 7, 2010 at 10:31 am

      The ArchLinux website says that JFS does need to be defragmented:

      That page lists these sources:[email protected]/msg00573.html

      Of course, your comments regarding user privileges and home are valid, but don’t forget about the software that is changed when updating the system.

    8. Mike J said on June 8, 2010 at 3:59 pm

      I was just reading an article in one of those picky British mags finding that defragging is of no real advantage in Windows either, speedwise.

      1. Scott L said on June 9, 2010 at 6:08 am

        One big HUGE reason to defragment that everyone seems to miss is that if your drive screws up and you lose your directory structure (or accidentally delete it)… it becomes so much nicer to recover your very valuable photo collection if the files are contiguous!

        Thanks for file!

    9. inameiname said on June 10, 2010 at 9:31 am

      The version of the ‘defrag’ script that is linked here is outdated, 0.6, and using it, I noticed if there isn’t enough space to copy a file (say only 500mb left on a flash drive and I have files larger than a gig on there) it messes up that gig file, and makes it 500mb to compensate. I did find that the latest version of the script, 0.8, seems to have fixed this flaw. Just a warning…

    10. hiska said on July 3, 2011 at 4:35 am

      Why is the filename “”? It’s a shell script, not a Perl script. Also, to obtain the latest version of the script (0.08 at this time), go to

    11. Gechurch said on November 15, 2011 at 2:58 am

      What an interesting article. I’d always assumed that Linux, being created by geeks, would have some really clever techniques going on to avoid fragmentation and to keep I/O speeds fast. It’s quite the opposite – the files are stored on the slowest part of the disk, and are deliberately fragmented to start with. That explains why you see no performance degradation over time Jack – if you start with the worst possible scenario it’s hard for the situation to worsen.
