Should You Defragment An SSD?

Martin Brinkmann
Jan 3, 2009
Updated • Dec 2, 2012
Hardware

So-called SSDs are becoming increasingly popular, especially in the netbook sector. Solid State Drives have several distinctive advantages, such as faster access times, lower power usage and completely silent operation. The main disadvantage, which you might notice especially in netbooks, is the write speed of these drives, which is usually lower than that of conventional hard drives.

With more and more Solid State Drives hitting the streets, it is important to understand the differences. Defragmentation describes the process of physically reorganizing the contents of a hard drive or partition so that the sectors of each file are stored contiguously and in order, which reduces load and seek times.

Solid State Drives can access any location on the drive in the same amount of time. This is one of their main advantages over hard drives, and it also means that there is never a need to defragment a Solid State Drive. These drives have actually been designed to write data evenly across all sectors of the drive, a technique the industry calls wear leveling. Each sector of a Solid State Drive supports only a limited number of writes before it can no longer be overwritten (a theoretical limit that is unlikely ever to be reached in normal work environments).
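
To illustrate the idea, here is a minimal toy model of wear leveling in Python. It is a sketch under assumptions, with a made-up block count and a simple least-worn-block policy, not any vendor's actual firmware:

```python
# Toy wear-leveling model (an illustration only, not real SSD firmware):
# logical blocks are remapped so that each write goes to the least-worn
# free physical block, spreading erase cycles evenly across the drive.
N_BLOCKS = 1024                     # made-up number of physical blocks

erase_count = [0] * N_BLOCKS        # wear counter per physical block
mapping = {}                        # logical block -> physical block

def write_block(logical):
    mapping.pop(logical, None)      # the old physical block becomes free
    used = set(mapping.values())
    # choose the least-worn physical block that is currently free
    physical = min((b for b in range(N_BLOCKS) if b not in used),
                   key=lambda b: erase_count[b])
    mapping[logical] = physical
    erase_count[physical] += 1

for _ in range(10_000):             # hammer the same logical block
    write_block(0)

print("max erases on any block:", max(erase_count))  # ~10, not 10,000
```

Without wear leveling, rewriting the same logical block 10,000 times would erase one physical sector 10,000 times; with the mapping in place, no sector in this model is erased more than about ten times.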

If you did defragment your Solid State Drive, you can rest assured that you did not harm it in any way. The process is simply not needed, and because defragmentation causes a large number of write operations, it brings the drive closer to its write limits.

Not needing defragmentation is therefore another advantage of Solid State Drives.


Comments

  1. Techie007 said on May 21, 2013 at 9:04 am

    Guys, Golem is absolutely right. You all need to learn how the NTFS filesystem works. HDDs and SSDs themselves do not need defragmenting; they are always seen as one large, contiguous block of storage, no matter what is stored on them. However, the NTFS filesystem does need defragmentation, no matter what the storage medium is. The performance penalty of a fragmented NTFS filesystem is greater on an HDD than on an SSD, because the SSD can do random reads very fast, whereas the HDD is very slow at random reads.

    First of all, there is a synchronization latency simply to do an I/O operation, regardless of the amount of data transferred. If the NTFS filesystem is badly fragmented, it will read your files in many tiny I/O reads instead of a few large I/O reads. Each of these “reads” has a synchronization penalty.

    Secondly, each MFT record can hold only so many fragments before an “extended” record has to be created. Fragment the file some more, and another “extended” record will have to be created. To read the file, Windows will have to read each of these into memory, which takes time and memory space. Then the NTFS filesystem driver has to parse the “file extent” data to calculate the actual offsets to the portion of data that was requested. That takes CPU time.

    Thirdly, the concept of “fragmentation” on an SSD with regard to wear leveling is totally wrong; that should be called “mapping” to keep it separate from the totally unrelated concept of filesystem fragmentation. We are not trying to defragment the SSD (which will span and stripe the data across memory cells regardless); we’re defragmenting the NTFS filesystem, which gets slower and slower the more fragments it has to keep track of.

    I can prove this with a RAM drive, folks, and a RAM drive is much faster than an SSD!
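
    For illustration, a minimal Python sketch of the per-I/O overhead described above: reading the same data as one large sequential request versus many small scattered requests. The file name and sizes are made up, and absolute timings will vary by machine.

    ```python
    # Sketch: one big read vs. many 4 KB reads of the same 64 MB of data.
    # Each small read pays a fixed per-request cost, which is the penalty
    # a badly fragmented NTFS volume imposes even on fast media.
    import os, random, time

    PATH = "fragtest.bin"           # hypothetical test file
    SIZE = 64 * 1024 * 1024         # 64 MB
    CHUNK = 4096                    # 4 KB "fragments"

    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))   # create the test data

    def sequential_read():
        with open(PATH, "rb") as f:
            f.read()                # a single large I/O request

    def scattered_read():
        offsets = list(range(0, SIZE, CHUNK))
        random.shuffle(offsets)     # simulate a badly fragmented file
        with open(PATH, "rb") as f:
            for off in offsets:
                f.seek(off)
                f.read(CHUNK)       # one small I/O request per fragment

    for fn in (sequential_read, scattered_read):
        start = time.perf_counter()
        fn()
        print(fn.__name__, round(time.perf_counter() - start, 3), "s")

    os.remove(PATH)
    ```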

  2. Ten98 said on January 6, 2011 at 5:28 pm

    PS: what Golem says is utter nonsense.

    Maybe the defrag helped him by kick-starting his SSD’s garbage collection routine somehow, or maybe by moving data away from a bad block of flash?

    Fragmented files on an SSD make zero difference to performance.

    1. Anonymous said on March 3, 2011 at 5:47 pm

      ‘…utter nonsense’? A bit harsh, Ten98!

      If Golem finds he has a performance increase after defragging, irrespective of the reason for it, I’d have thought that he and many others are going to defrag occasionally.

      The fact that “fragmented files on an SSD make zero difference to performance” is, in itself, really neither here nor there.

  3. Ten98 said on January 6, 2011 at 5:24 pm

    Quite a lot of nonsense in this comment thread.

    No, you should not enable defrag on an SSD; there is zero performance benefit and you are needlessly subjecting the drive to write cycles.

    MLC flash is rated for 10,000 writes and SLC is rated for 100,000 writes, so the “50 years of continuous read/writes” stat is only true for SLC drives; on MLC drives it’s 5 years.

    However, none of us will be continuously reading and writing our drives, so the real-world longevity of an MLC drive is probably closer to 20 years before we see any performance degradation or bad blocks.

  5. ~qurl said on March 15, 2009 at 3:23 am

    I have “Windows Live OneCare”, which tunes up and defragments automatically on a schedule with no off switch. Any ideas how to save my new SSD from premature failure?

  6. Jojo said on January 23, 2009 at 2:04 am

    If what Golem says is correct, then it would appear that the problem might be in the way the OS and the SSD controller interact. I’m guessing here, but I would think that fragmented files require the controller to issue more read commands (and perhaps more OS interrupts) than a contiguous file would. That would seem to be where the overhead and delay would most likely occur.

    If the real worry of SSD manufacturers is the amount of writing that occurs in a disk defrag session, then maybe the solution would be a new type of defrag operation.

    Instead of writing back and forth to the SSD device, the data on the SSD could be copied to a magnetic hard drive and defragged in the copy process. Then the data on the hard drive could be copied back to the SSD. That should lower the overall number of write operations that the SSD would accumulate.
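
    A rough sketch of that copy-off-and-back idea in Python, with hypothetical paths; copying file by file means each file is written back to the SSD in a single contiguous pass:

    ```python
    # Hypothetical sketch of the offload "defrag" described above: stage
    # the data on a magnetic drive, clear the SSD folder, then copy it
    # back so each file is rewritten in one pass. Paths are made up.
    import shutil

    SSD_DIR = "E:/data"             # folder on the SSD (assumption)
    HDD_DIR = "D:/staging/data"     # staging folder on a magnetic drive

    shutil.copytree(SSD_DIR, HDD_DIR)   # 1. copy the data off the SSD
    shutil.rmtree(SSD_DIR)              # 2. free the SSD space in one go
    shutil.copytree(HDD_DIR, SSD_DIR)   # 3. write each file back whole
    shutil.rmtree(HDD_DIR)              # 4. remove the staging copy
    ```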

  7. Golem said on January 22, 2009 at 11:13 am

    I have been using an SSD for 3 months; my PC runs 24h a day, mostly downloading torrents etc.
    I installed Windows on a fresh new OCZ V2 SSD and everything was running fine, but overall disk performance degraded over time. I wanted to be sure Windows was OK, so I checked the disk with some low-level tools and all was fine. Then I just wanted to check how fragmented it was, and… Disk Defrag showed nothing but red, fragments only. I knew everyone says it’s not necessary, especially the producers of these SSDs, but I gave it a try anyway.
    After about 2 hours of defragging, I checked everything: internet browsing, file copying etc.
    IT WAS like a 90% speed increase!!!!
    DO NOT BELIEVE THAT SSDs DON’T NEED DEFRAGMENTING!!
    Of course defrag makes lots of writes to the disk, and maybe that’s the only reason they don’t recommend doing it: the disk may break before the warranty ends? That’s my point.

  8. Dan said on January 14, 2009 at 1:57 pm

    @debug
    File fragmentation is not necessary for free space fragmentation. And file defragmentation is not a necessary prerequisite for free space defragmentation.

    E.g., assume a volume that has, say, 10,000 files, perfectly contiguous/unfragmented, with the free space perfectly consolidated. Now delete half of those files at random. Voila! You have your free space fragmentation. Consolidating this free space needs only file movement, not file defragmentation, if you want to get technical :) The savings in erase/write cycles compared to defrag + free space consolidation would depend on the specific situation on the disk.
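
    A quick toy model of that thought experiment in Python (single-block files and made-up counts, purely for illustration), showing how random deletions alone fragment the free space:

    ```python
    # Toy model: 10,000 single-block files laid out contiguously. Deleting
    # half of them at random fragments the free space even though every
    # surviving file is still perfectly contiguous.
    import random

    N = 10_000
    free = [False] * N                          # True = free block
    for i in random.sample(range(N), N // 2):
        free[i] = True                          # delete half the files

    # a fragment is a run of consecutive free blocks
    fragments = sum(1 for i in range(N)
                    if free[i] and (i == 0 or not free[i - 1]))
    print("free-space fragments:", fragments)   # ~2,500 on average
    ```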

    I found the original document that the previous link references by looking around the Diskeeper site.
    http://downloads.diskeeper.com/pdf/HyperFast.pdf
    They seem to have run the test on Apacer 8GB SSDs.

    It doesn’t say anywhere that HyperFast “itself does random writes”… so I don’t know if I am missing something…

  9. debug said on January 12, 2009 at 8:14 pm

    I have to disagree with Diskeeper’s assessment that SSDs “do not experience the same file fragmentation read delays of traditional platter disk”. Sure, they still outperform platters on random reads, but I’ve actually measured a few, and sequential reads are on the order of 10x faster than random reads on an SSD.

    I also think that saying they don’t need defrag but do need free space consolidation is a bit of an oxymoron. Doesn’t defragmenting a drive consolidate (defragment) the free space? Sure, if you’re not interested in reducing random reads, you can defragment the free space without defragmenting the files.

    Aside from disagreeing with Diskeeper’s random read assessment, I find several other things suspect, not the least of which being that we’re looking at marketing presentations. Their tests are on an 8GB SSD? How old is this? The software claims to improve performance by reducing random writes, yet when the software itself does random writes it somehow performs better?

    It seems reminiscent of the ‘soft RAM’ software marketed in the ’90s, which was generally software that provided a glorified page file. My guess is that HyperFast is some sort of software cache for SSDs, whether it actually caches reads/writes or simply reorganizes the way the data is laid out before writing to disk (i.e. bundling 4k writes into 64k writes). I’m not claiming that it doesn’t work as advertised in practice, just that throwing a few marketing slides our way is fairly unconvincing.

  10. Dan said on January 12, 2009 at 12:18 pm

    SSD defrag is not required in the conventional sense of the term; what seems to be required is occasional free space consolidation to facilitate sequential writes rather than random writes. Apparently random writes are the Achilles’ heel of SSDs.
    http://www.diskeeperblog.com/archives/2008/12/hyperfast_is_al.html
    BTW, from the blog link, HyperFast is part of Diskeeper 2009, so it’s not exclusive to Apacer. But I think the Apacer drives may have these algorithms embedded in their firmware… not sure though. I think it’s a great feature that Diskeeper 2009 can handle both magnetic and solid state drives once installed.

  11. RJ said on January 5, 2009 at 8:32 pm

    Diskeeper just came out with a product to address the main disadvantage of SSDs. The product is called HyperFast, and it is considered an “optimizer” as opposed to a defragmenter, designed to consolidate free space, improve write performance, and preemptively optimize the storage device at the OS level.
    http://www.diskeeper.com/hyperfast/index.aspx

    1. Martin said on January 5, 2009 at 9:46 pm

      RJ, now that’s interesting. It looks like it will be exclusively licensed to Apacer. I hope others will provide similar optimizers if this proves effective in a work environment.

  12. TK said on January 5, 2009 at 11:33 am

    There is one so far unmentioned advantage to defragging any file system: lost file recovery after drive data structure corruption. If all the sectors in a lost file are contiguous, then one only needs the start sector and the length to recover the file. If the SSD’s built-in controller redistributes writes across its address space, then defragging should not affect SSD access times, as what appears contiguous to the OS is likely scattered around the SSD’s address space. If there is still a speed improvement, then it is the PC architecture and OS overhead making random reads/writes slower than block reads/writes.
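
    As a sketch of why contiguity helps recovery, assuming the lost file was stored in one unbroken run (the device path, sector size and example numbers are hypothetical):

    ```python
    # Minimal carve: if a lost file occupied one contiguous run of sectors,
    # recovery needs only the start sector and the length.
    SECTOR = 512                    # assumed sector size in bytes

    def carve_file(device, start_sector, n_sectors, out_path):
        """Copy a contiguous run of sectors straight into a new file."""
        with open(device, "rb") as dev, open(out_path, "wb") as out:
            dev.seek(start_sector * SECTOR)
            out.write(dev.read(n_sectors * SECTOR))

    # e.g. carve_file("/dev/sdb", 2048, 8192, "recovered.bin")
    ```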

  13. Max said on January 5, 2009 at 2:26 am

    debug: Thanks. ^_^

  14. debug said on January 4, 2009 at 10:05 pm

    The wear leveling is done by the controller; basically, it has a pool of flash to work with. The controller also manages how the system sees the disk, intervening so that the actual location of items in flash is transparent. In other words, the system still sees blocks, sectors, etc., but they have no bearing on where the data actually sits in flash.

    Interestingly enough, defragmentation still has a fairly profound effect on SSD performance, and I imagine it should still be done on occasion. Sequential reads can be up to 6x faster than random reads, and sequential writes can be faster by 20x or more (http://learnitwithme.com/?p=106). This may have to do with the way the flash controller manages the pool of flash: seeing an attempt to write sequential blocks on the system side, the controller chooses a single location for that data, whereas random blocks may need several locations selected and remembered. Just a theory.
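
    A rough way to see the sequential-versus-random write gap for yourself; a Python sketch with made-up sizes (the measured ratio will depend heavily on the drive and its controller):

    ```python
    # Write the same total data sequentially and then at random offsets,
    # timing both. Random writes force the controller to juggle many
    # locations; sequential writes land in one place.
    import os, random, time

    PATH = "write_test.bin"         # hypothetical test file
    SIZE = 32 * 1024 * 1024         # 32 MB
    CHUNK = 4096                    # 4 KB per write

    with open(PATH, "wb") as f:
        f.truncate(SIZE)            # pre-allocate the file

    def timed_write(shuffle):
        offs = list(range(0, SIZE, CHUNK))
        if shuffle:
            random.shuffle(offs)
        start = time.perf_counter()
        with open(PATH, "r+b") as f:
            for off in offs:
                f.seek(off)
                f.write(b"\0" * CHUNK)
            f.flush()
            os.fsync(f.fileno())    # make sure the data really hits the disk
        return time.perf_counter() - start

    print("sequential:", round(timed_write(False), 3), "s")
    print("random:    ", round(timed_write(True), 3), "s")
    os.remove(PATH)
    ```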

  15. Max said on January 4, 2009 at 8:54 am

    Thanks. ^_^

  16. Max said on January 4, 2009 at 12:30 am

    Sorry. May I quote:

    “These drives have actually been designed to write data evenly across all sectors of the drive, a technique the industry calls wear leveling.”

    So, I mean, would the defragmentation process (or, in the case of my concern, a secure erase) be intercepted by the wear leveling technology and forced to write to other locations on the SSD?

    1. Martin said on January 4, 2009 at 2:19 am

      Max, as I said, wear leveling is not a problem when deleting data, because the erase applications specify the sectors where data is deleted.

  17. Martin said on January 3, 2009 at 5:25 pm

    Max, those are two different processes. When you erase the empty space on the disk, you explicitly tell the program to write data to each empty sector on the drive.
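
    A minimal sketch of that approach, assuming this is how free-space erasers work under the hood (the path is hypothetical): grow a junk file until the disk is full so every empty sector gets overwritten, then delete it.

    ```python
    # Overwrite all free space on a drive by growing a junk file until
    # the disk is full, then deleting it again.
    import os

    path = "E:/wipe.tmp"                # temp file on the drive to wipe
    chunk = b"\0" * (1024 * 1024)       # 1 MB of zeros per write

    try:
        with open(path, "wb") as f:
            while True:
                f.write(chunk)          # keep going until the disk is full
    except OSError:
        pass                            # disk full: every free sector hit
    finally:
        os.remove(path)                 # release the space again
    ```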

  18. Martin said on January 3, 2009 at 5:21 pm

    Me, you are right about defragmentation, and that is exactly what I wanted to say in the article. Bad wording, so to speak, which has been corrected.

  19. me said on January 3, 2009 at 4:15 pm

    Defragmentation is NOT the process of moving files closer together. It is the process of arranging all of a file’s sectors so that they are contiguous and in order.

    And yes, it still needs to be done on any type of drive if you wish to be able to easily recover the contents of said files after crashes caused by crappy OS file system management (and crappy OS everything-else management), e.g. daily Windows usage.

    If you have ever needed to recover an important file that got mangled in such a situation then you’ll soon realize the need to defragment routinely.

  20. Max said on January 3, 2009 at 3:46 pm

    How does wear leveling affect a “secure erase” (which performs multiple writes to the same location on an HDD)?

  21. Dotan Cohen said on January 3, 2009 at 2:48 pm

    Of course one can reach the write limit of solid state disks. Try running a mail server or any other system which deals with many write cycles.

    1. Martin said on January 3, 2009 at 2:53 pm

      Dotan, yes, you reach the limit in about 50 years on a 64 GB drive with continuous, never-ending writes to the disk. Check this article: http://www.storagesearch.com/ssdmyths-endurance.html

  22. Jojo said on January 3, 2009 at 2:16 pm

    Which makes me curious:

    Are the SSD chips the same quality, better or worse than regular memory (RAM) chips?

    Since RAM is used much more than an SSD would be, would it wear out quicker?

    1. Martin said on January 3, 2009 at 2:18 pm

      Jojo take a look at this excellent article: http://www.storagesearch.com/ssd-ram-v-flash.html

  23. Thinker said on January 3, 2009 at 11:56 am

    I have never noticed that defragmentation helps on an HDD either. It’s kind of a myth; Windows can manage data location itself.
    Anyway, I have 2x SSD in RAID 0 and I don’t see a big difference compared to my older 2x HDD. There is a benchmark that shows I can get a 230 MB/s transfer rate, but in the real world, even with 100x better access times, I get maybe 50% more performance.

    1. Sim Jin Yi said on April 9, 2011 at 12:10 am

      Thanks for your good post. It is very helpful.

    2. ez said on December 19, 2010 at 7:06 am

      Regarding defragmenting not helping on HDDs and it being a myth… that actually couldn’t be more untrue.

      If you play any graphics-heavy online games (MMORPG types), you would notice a difference: more lag, more stutters.

      I have seen it first hand. But if you are just surfing the web and writing docs, you probably won’t notice.
