Miscellaneous PC Issues & Tips

by ProdigalSon, 22 Replies

  • EntirelyPossible

    Yeah, that site was such adorable fluff. It still didn't answer my specific question.

  • Anony Mous

    Defragging is useless in Linux because the file systems it uses are different and better suited to saving and retrieving data. Windows still uses NTFS, a FAT/HPFS descendant (a lineage that goes back to the '80s) that is pretty linear and dumb about how it saves things: two consecutive writes to two separate files end up with the data for the two files interleaved over a large area. Most Linux file systems don't pack several files into consecutive blocks; they leave a bit of space after closing a file, so even if you append to a file, the new data is stored in the same area as the original file and fragmentation doesn't build up nearly as quickly.
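
    A rough toy model of why interleaving hurts and a per-file reservation helps (Python, purely illustrative; the block counts and reserve size are invented):

        def naive_alloc(writes_per_file=4):
            # hand out whichever block is next on the disk, FAT/NTFS style
            disk, owners = [], {"A": [], "B": []}
            for _ in range(writes_per_file):
                for name in ("A", "B"):              # two files growing turn by turn
                    owners[name].append(len(disk))   # next free block on the disk
                    disk.append(name)
            return owners

        def reserved_alloc(writes_per_file=4, reserve=8):
            # give each file its own reserved region, so alternating appends still
            # land next to that file's earlier blocks instead of interleaving
            home = {"A": 0, "B": reserve}
            owners = {"A": [], "B": []}
            for i in range(writes_per_file):
                for name in ("A", "B"):
                    owners[name].append(home[name] + i)
            return owners

        def fragments(blocks):
            # number of contiguous runs the file is split into (1 = no fragmentation)
            return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

        for label, layout in (("naive", naive_alloc()), ("reserved", reserved_alloc())):
            for name, blocks in layout.items():
                print(label, name, blocks, "-> fragments:", fragments(blocks))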

    Also, more modern file systems make intelligent use of caches (they're adaptive) and prefetch, reading more data than was requested, so the effects of fragmented data are significantly reduced. Of course, if you append a lot of data it still gets broken up, but for reading those large files it doesn't really matter whether or not they're broken up (streaming consecutive data is so fast that the rest of the computer can't keep up, so there's a lot of prefetching that can be done). Also, the latest intelligent file systems (ZFS, ext4, etc.) use an "intent log" to save small pieces of data, then re-order and re-write the data on the fly before committing it to disk, and in some cases they can use inline compression (because CPU time is cheap compared to disk time) to reduce the amount of data written to disk and thus the time needed to read/write it.
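
    The inline-compression trade-off in miniature: spend a little CPU to move fewer bytes over the (much slower) disk. A sketch with Python's zlib standing in for whatever compressor the file system really uses (e.g. LZ4 or gzip in ZFS):

        import zlib

        def write_block(raw: bytes) -> bytes:
            # compress before hitting the disk; fall back to the raw bytes when
            # compression doesn't actually save anything (incompressible data)
            packed = zlib.compress(raw)
            return packed if len(packed) < len(raw) else raw

        data = b"the same log line repeated many times\n" * 1000
        on_disk = write_block(data)
        print(len(data), "bytes of data ->", len(on_disk), "bytes written to disk")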

    Defragmentation is also not necessary (on any file system) if you use Solid-State Drives (SSDs). It actually hurts to defrag those disks: there is no mechanical arm that needs to be moved, so everything is retrieved at the same speed.

  • EntirelyPossible

    So, Linux file systems intentionally leave extra space after each file? That seems inefficient, space-wise. And how much is a "bit" of space? A literal bit? A byte? What if the file doubles in size? How do the bit of space and modern adaptive caching play into that? Am I to understand that my PC running Linux takes part of my memory on an adaptive basis and reserves it for the file system? What if I need it for application use? Can I get it back?

    Other than the "bit of space at the end of each file" part, none of that told me why Linux file systems don't need defragmenting, and that part was woefully incomplete.

    I am asking in all sincerity. What happens on a busy Linux file system to prevent fragmentation? A bit of space at the end of each file is not by any means sufficient to prevent fragmentation.

  • jay88

    EP: Apparently this is not your first rodeo with this topic (Mr. Gates)

    http://www.fact-reviews.com/defrag/Linux.aspx

    Tell me what you think.

    FOF

  • EntirelyPossible

    First, why did you say Mr. Gates?

    Second, the article was a succinct description of the two methods of writing files into a file system, but it also made a LOT of assumptions. It didn't provide any boundary testing data (when it did talk about boundaries, it didn't provide any supporting information), it talked about large files but never defined what large or small meant, and it completely ignored the downstream performance effects of having to rearrange non-fragmented files to make room for new large files when no contiguous free space is left on the file system, etc.

    It was amateurish.

    I don't work for Microsoft (although I used to, and dealt extensively with file systems, among other things). I just taught a class on distributed global file systems two weeks ago, including what large vs. small inodes do for you, cache coherency algorithms, data read and write algorithms with multiple nodes accessing a file system, reading inodes and metadata into cache using large translation lookaside buffers, ensuring protection of data and metadata across clustered nodes, methods of distributing files across multiple nodes and drives in storage arrays without the use of a distributed lock manager, etc.

    All of this runs on a file system in Linux, and fragmentation is important. It just amuses me when I hear "Linux never needs defragmenting".

    I might know a little something about file systems :)

  • jay88

    @EP

    I did not say Linux was exempt from needing to be defragged; I said you don't have to be concerned with it as much, even more so with basic desktop usage.

    What you just explained is large scale, which makes the management of fragmentation that much more important.

    ..........

    I was teasing with the Gates thing, no doubt

    Are you a Red Hat instructor?

  • EntirelyPossible

    Are you a Red Hat instructor?

    Oh god no. I work for Dell :)

    And yeah, I tend to be more focused on enterprise applications, large scale compute clusters, distributed namespaces, analytical applications, transactional databases, that sort of thing.

    Yeah, you did say "nearly as much" when talking about Linux. Fair enough. It just amuses me when people say that, particularly given my experience. Usually there are multiple ways to implement file systems, each with its own peculiarities, pros, and cons. Granted, my experience is far beyond the norm, far far far beyond.

    I used to be a smart engineer; now I spend my time talking about theory, the future of file systems, roadmaps, deduplication, the value of data and tiering it between storage platforms, etc.

  • jay88

    If it is not too much, do you mind spilling the beans a little on the future of file systems?

  • EntirelyPossible

    I actually gave that presentation to a customer earlier today. He, of course, was under an NDA :)

    If there is something specific you would like to know, I can speak to it in generalities. I really can't go into detail on a public forum about something specific to my employer.

    Having said that, if you are asking about enterprise file systems, I would suggest you think about policy-based data tiering, thin provisioning, wide data striping, that sort of thing.

  • Anony Mous

    Not only Linux file systems but several others (most enterprise systems) don't need defragging. I'm talking about several generations of file systems. FAT and NTFS are what you could call 2nd-generation (16-bit, no access control, directory structures added later) and 3rd-generation (32-bit, access control, etc.) file systems, but they still stem from the '80s and '90s.

    Ext3 is a 4th-generation file system, where people started using journaling (keeping track of changes that haven't finished yet so they can be rolled back when the system crashes, instead of doing a disk check to find the loose ends) and databases/B-trees to keep file information, which gave much better performance for really large file systems. Ext4 and ZFS are current-generation file systems (ZFS is 128-bit) that scale to several exabytes if not more, and they take all the things we have learned over the years and improve on them further.
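
    Journaling in a nutshell is "write down what you're about to do, do it, then cross it off", so a crash leaves a short note to replay instead of a whole disk to check. A stripped-down sketch (Python; the file names and record format are invented, and it is nothing like a real implementation):

        import json, os

        JOURNAL = "journal.log"   # invented name, stands in for the on-disk journal area

        def journaled_append(path, data):
            # 1. durably record what we are about to do
            with open(JOURNAL, "a") as j:
                j.write(json.dumps({"path": path, "data": data, "done": False}) + "\n")
                j.flush(); os.fsync(j.fileno())
            # 2. apply the change to the real file
            with open(path, "a") as f:
                f.write(data)
                f.flush(); os.fsync(f.fileno())
            # 3. mark the intent as completed
            with open(JOURNAL, "a") as j:
                j.write(json.dumps({"path": path, "done": True}) + "\n")

        def recover():
            # after a crash only the journal is scanned, not the whole disk the way
            # fsck/chkdsk would; intents without a "done" record get replayed.
            # (Real journals log whole block images so replay is idempotent; this
            # toy version glosses over that.)
            pending = {}
            if not os.path.exists(JOURNAL):
                return
            with open(JOURNAL) as j:
                for line in j:
                    entry = json.loads(line)
                    if entry.get("done"):
                        pending.pop(entry["path"], None)
                    else:
                        pending[entry["path"]] = entry["data"]
            for path, data in pending.items():
                with open(path, "a") as f:
                    f.write(data)

        journaled_append("notes.txt", "hello\n")   # "notes.txt" is just an example file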

    As I said, the little bit of space is, in the simplest of the advanced file systems (ext3), a block (depending on formatting, of course, but usually 4 kB). That would have been 'wasteful' in the 1980s, when a hard drive was 10 MB and cost several hundred dollars, but these days, do you really think you'll miss 4 kB per file (~40 MB out of your average 1 TB disk)? There is always a trade-off to be made between performance and space efficiency; there are file systems specifically meant for embedded systems, where even a traditional allocation table would be too much.

    The reasoning of the creators was: if you make small changes and additions to files, you shouldn't be penalized for it in read performance by having the disk thrash all over the place to read a single file. For large files, on the other hand, additions are either non-existent or themselves large, so it's not really a problem to have a large file divided in two pieces, since that requires just one interruption. A program can also 'state' how much space a file needs, so an area of the disk is reserved as a single large block regardless of whether it has been written to yet.
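
    That last point (a program 'stating' up front how much space a file needs) is exposed on Linux as fallocate(); a minimal sketch using Python's os.posix_fallocate (Linux-only; the file name and size are just examples):

        import os

        SIZE = 100 * 1024 * 1024   # reserve 100 MB before writing a single byte
        fd = os.open("big.dat", os.O_CREAT | os.O_WRONLY, 0o644)
        try:
            # the blocks are reserved up front (ext4 keeps them as unwritten extents),
            # so later writes fill the reservation instead of scattering across the disk
            os.posix_fallocate(fd, 0, SIZE)
        finally:
            os.close(fd)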

    FAT and NTFS have a similar issue in that a block can only be allocated to a single file. So if you have a 5 kB file and you've formatted at 4 kB/block, the file is broken up and written as 4 kB + 1 kB, actually using 8 kB of space. And since FAT can't append in place, the next 5 kB you add uses yet another 2 blocks somewhere else on the disk, so you're now using 16 kB for your 10 kB file. Defragmenting brings those blocks back together. Fragmentation still happens in ext3, but it is far less severe and does not significantly impact performance.
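
    The arithmetic from that example, just written out (Python; same 4 kB blocks and 5 kB + 5 kB append scenario as above):

        import math

        BLOCK = 4 * 1024   # 4 kB blocks, as in the example

        def blocks(size_bytes):
            return math.ceil(size_bytes / BLOCK)

        first_write  = 5 * 1024                    # 5 kB file -> 2 blocks (8 kB on disk)
        second_write = 5 * 1024                    # appended later, lands in new blocks
        total_data   = first_write + second_write  # 10 kB of actual data

        # if the append can't reuse the 3 kB of slack in the last block, each write
        # pays for its own blocks: 2 + 2 = 4 blocks = 16 kB for 10 kB of data
        used = (blocks(first_write) + blocks(second_write)) * BLOCK
        print(total_data, "bytes of data occupy", used, "bytes on disk")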

    The intent log in ext4 and ZFS (and some other recent file systems) works by writing the small changes to a specific area on disk (this is oversimplified; the technical details differ per implementation and I don't want to look them up right now), or in some cases to solid-state disks or battery-backed memory, and then later (when it gets flushed, the file gets closed, or the system is idle) re-writing the entire file in order to a designated area of the disk. So no matter how fast or slow you write to a file (i.e. if you're downloading pieces of the internet), it is always rewritten in the same area of the disk. ZFS can actually use variable block sizes (up to 128 kB), so you're not 'wasting' a lot of space and you can also read large files much faster (you give 1 command and get up to 128 kB, instead of 32 commands getting 4 kB each with fixed block sizes).
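
    A toy version of that "collect the small writes, then rewrite them in one piece" idea (Python; wildly simplified, and the class name and flush threshold are made up):

        class IntentLog:
            """Collect small appends in a staging area, then flush them to the real
            file in one contiguous write (a cartoon of the behaviour described above;
            thresholds and names are invented)."""

            def __init__(self, path, flush_at=64 * 1024):
                self.path, self.flush_at = path, flush_at
                self.pending = []          # small writes waiting to be coalesced
                self.size = 0

            def append(self, chunk: bytes):
                self.pending.append(chunk)
                self.size += len(chunk)
                if self.size >= self.flush_at:   # enough piled up: write it out in order
                    self.flush()

            def flush(self):
                if not self.pending:
                    return
                with open(self.path, "ab") as f:
                    f.write(b"".join(self.pending))   # one big sequential write
                self.pending, self.size = [], 0

        log = IntentLog("download.part")   # e.g. a file arriving in dribs and drabs
        for piece in (b"x" * 1500 for _ in range(100)):
            log.append(piece)
        log.flush()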

    The adaptive caching in ext3, ext4, etc. (and, in a limited form, Windows 7 too) does indeed use system memory (and in the case of ZFS you can also use solid-state drives as a secondary cache), and why not: you have, what, 2-4 GB in your machine and your current applications only need ~500 MB, so why not read ahead in your files while the system is busy doing something else? Then when you need the data you don't have to wait for the disk to thrash around, you just read it from memory, which negates any fragmentation issues because the reads happen while the system is processing the previous data. If memory comes under pressure, the cache evicts the least used pieces first, so you always have the most used pieces in memory.
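
    The "evict the least used pieces first" behaviour is basically an LRU cache; a bare-bones sketch (Python; capacity and the fake disk are invented for the demo):

        from collections import OrderedDict

        class BlockCache:
            """Keep the most recently used disk blocks in RAM; evict the least
            recently used one when full. Capacity is tiny just for the demo."""

            def __init__(self, capacity=4):
                self.capacity = capacity
                self.blocks = OrderedDict()          # block number -> data

            def read(self, block_no, read_from_disk):
                if block_no in self.blocks:
                    self.blocks.move_to_end(block_no)    # cache hit: mark as recently used
                    return self.blocks[block_no]
                data = read_from_disk(block_no)          # cache miss: go to the (slow) disk
                self.blocks[block_no] = data
                if len(self.blocks) > self.capacity:
                    self.blocks.popitem(last=False)      # drop the least recently used block
                return data

        cache = BlockCache()
        fake_disk = lambda n: f"<block {n}>"
        for n in (0, 1, 2, 0, 3, 4, 0):     # block 0 keeps getting touched, so it survives
            cache.read(n, fake_disk)
        print(list(cache.blocks))            # [2, 3, 4, 0] with capacity 4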

    Intelligent systems like ZFS and certain RAID controllers also (ab)use the drives' on-board buffers: they'll command the hard drive to fetch a little more data than necessary (the adjacent blocks) but won't actually read it back, so that if the system decides to ask for it anyway it can be served from the disk's buffer, while no bandwidth is wasted transferring and processing it otherwise.
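
    The general read-ahead idea in miniature (Python; window size invented). This toy version copies the adjacent blocks into a buffer, whereas the trick described above leaves them sitting in the drive's own on-board cache without even sending them over the bus:

        READ_AHEAD = 4   # how many extra adjacent blocks to pull in (made-up number)

        buffer = {}      # stands in for the drive's / controller's cache

        def read_block(n, fetch):
            """Return block n, speculatively buffering the next few adjacent blocks."""
            if n not in buffer:
                # one larger request instead of several small ones; the extra blocks
                # cost almost nothing because the head is already there
                for m in range(n, n + 1 + READ_AHEAD):
                    buffer[m] = fetch(m)
            return buffer[n]

        fetch = lambda m: f"<data of block {m}>"
        read_block(10, fetch)          # miss: fetches blocks 10..14 in one go
        read_block(11, fetch)          # hit: served from the buffer, no disk access
        print(sorted(buffer))          # [10, 11, 12, 13, 14]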

    Microsoft has been trying to come up with an answer to the fragmentation issues of NTFS/FAT (WinFS has been promised since Windows 2000 but still isn't working in Windows 8), and even Apple has made advancements in its HFS+ system. SSDs would be the easiest answer (fragmentation doesn't matter on them), but the performance would still be atrocious if you go into the details. That's why nobody uses Windows as a storage platform.

    With the above examples about caching, etc., you have to think about how much data you actually use. I have ~50 TB at work, the home directories of ~20 simultaneous users, and ~500 GB of read cache. There is only about 200 GB of user data that gets read at all commonly; yeah, we have a lot of stuff, but we never, ever read most of it. How many times do you look at all your pictures or listen to all your music?
