
File Fragmentation

Posted: 2009-02-18 18:40
by GateKeeper2000
Hi,

I have been using FileZilla Client and Server for a while to move fairly large files around my home network, but I am finding that the resulting downloads are heavily fragmented (a 350 MB file ends up in 350 chunks, for example!) and require significant defragmentation time afterwards. Is there a setting I am missing, or is this a known issue?

GK

Re: File Fragmentation

Posted: 2009-02-18 23:31
by botg
Defragment your disks and make sure you've got sufficiently large contiguous blocks of free disk space.

Don't worry too much about it; with the advent of cheap SSD drives, disk fragmentation is a relic of the past.

Re: File Fragmentation

Posted: 2009-02-19 23:05
by boco
OT: Trouble-free SSDs are not ready yet. Right now they suffer from internal fragmentation.

Do you transfer more than one file simultaneously? Depending on your filesystem, they could end up scattered, because the OS tries to write each of them to the next free block.
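
As a toy illustration of that effect (not FileZilla code; filenames and chunk size are arbitrary), two files written in alternating chunks tend to end up with interleaved blocks, because each write simply grabs the next free region:

// Toy illustration only: writes two files in alternating 1 MiB chunks,
// the way two simultaneous transfers would. On a mostly sequential
// allocator, their blocks end up interleaved on disk.
#include <cstddef>
#include <fstream>
#include <vector>

int main() {
    const std::size_t chunk = 1 << 20;     // 1 MiB per write
    const int chunksPerFile = 100;         // ~100 MiB each
    std::vector<char> buffer(chunk, 'x');

    std::ofstream a("transfer_a.bin", std::ios::binary);
    std::ofstream b("transfer_b.bin", std::ios::binary);

    for (int i = 0; i < chunksPerFile; ++i) {
        // Alternating appends: the filesystem hands out the "next free
        // block" to whichever file asks first, so A and B interleave.
        a.write(buffer.data(), buffer.size());
        b.write(buffer.data(), buffer.size());
    }
}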

Re: File Fragmentation

Posted: 2009-02-19 23:25
by botg
What is "internal fragmentation"?

Re: File Fragmentation

Posted: 2009-02-19 23:47
by boco
There is an article about the problem.
http://www.pcper.com/article.php?aid=669
It applies at least to the Intel drives, but may be relevant for the others, too, since the underlying technology is very similar.

Re: File Fragmentation

Posted: 2009-02-20 01:07
by GateKeeper2000
This happened as I was consolidating some data onto a drive for archiving. I had filled the drive to ~70%, at which point I ran defrag (the WinXP built-in one - not the best, but it does OK) and got 0% fragmentation and pretty contiguous free space. I then copied another 20 GiB of large files (300-500 MiB each) onto the 200 GB disk and thought I would just check the fragmentation. To my horror it was really high - one file of 350 MB was in 312 segments! Transfers are one at a time - it's slower to do it any other way, especially over a 1 Gbps link.
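
For reference, fragment counts like that can also be checked programmatically. Below is a minimal sketch using the Windows FSCTL_GET_RETRIEVAL_POINTERS ioctl to count a file's extents (roughly the number a defrag report shows); this is nothing FileZilla-specific, the path is a placeholder, and error handling is minimal:

#include <windows.h>
#include <winioctl.h>
#include <cstdio>
#include <vector>

// Counts the extents (fragments) of a single file.
int main() {
    // Placeholder path - substitute the file you want to inspect.
    HANDLE h = CreateFileW(L"D:\\archive\\bigfile.bin", FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                           OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        std::fprintf(stderr, "open failed: %lu\n", GetLastError());
        return 1;
    }

    STARTING_VCN_INPUT_BUFFER in = {};     // start at VCN 0
    std::vector<char> out(64 * 1024);      // room for many extents
    DWORD bytes = 0;
    DWORD extents = 0;

    // FSCTL_GET_RETRIEVAL_POINTERS returns the file's VCN->LCN extent list.
    // ERROR_MORE_DATA means the buffer held only part of it; continue from
    // the last VCN returned.
    for (;;) {
        BOOL ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                                  &in, sizeof(in),
                                  out.data(), (DWORD)out.size(),
                                  &bytes, nullptr);
        DWORD err = ok ? ERROR_SUCCESS : GetLastError();
        if (!ok && err != ERROR_MORE_DATA) break;

        auto* rp = reinterpret_cast<RETRIEVAL_POINTERS_BUFFER*>(out.data());
        if (rp->ExtentCount == 0) break;
        extents += rp->ExtentCount;
        if (ok) break;                     // got the whole list
        in.StartingVcn = rp->Extents[rp->ExtentCount - 1].NextVcn;
    }

    CloseHandle(h);
    std::printf("fragments (extents): %lu\n", extents);
}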

I am not expecting 0% fragmentation, but it was really annoying to spend 4 minutes transferring the data and then another 20 defragging the mess!

It's going to take a long while for the price of SSDs to come down to the point where I can replace the 3+ TB of HDDs I have in use. Besides, crazy fragmentation isn't good whatever the hardware; the filesystem adds overhead for every non-contiguous segment because of the extra allocation metadata it has to track.

GK

Re: File Fragmentation

Posted: 2009-02-20 04:08
by boco
The fuller a drive gets, the higher the fragmentation. I guess the filesystem is NTFS; in that case there are unmovable files scattered around, and the large files cannot be stored contiguously anymore.

For defragmentation, http://kessels.com/jkdefrag.

Re: File Fragmentation

Posted: 2019-06-14 12:16
by magistar
With 1.7 GB of contiguous free space on an 8 TB hard drive, I am still seeing files incoming to FileZilla FTP Server end up with 5000 fragments each for a 1 GB file. Is there a write buffer I can modify somewhere, similar to what torrent clients allow? The server has 4 GB of free RAM, so I could get at least 1 GB fragments. The OS is Windows 10 with NTFS.

Re: File Fragmentation

Posted: 2019-06-14 15:05
by boco
With only 1.7 GB left on an 8 TB disk, expect heavy fragmentation. Unless you meant 1.7 TB.

FileZilla Server does not yet support pre-allocation the way FileZilla Client already does. Maybe it's planned for the upcoming rewrite; the developer would have to answer that.
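
Roughly, pre-allocation means telling the filesystem the final size before any data is written, so it can try to reserve one contiguous run instead of growing the file write by write. A minimal sketch of one way to do that on Windows, via FILE_ALLOCATION_INFO - this is not FileZilla's actual implementation, and the path and size are placeholders:

#include <windows.h>
#include <cstdio>

// Sketch: pre-allocate a download target before streaming data into it.
int main() {
    const LONGLONG finalSize = 350LL * 1024 * 1024;   // expected file size

    HANDLE h = CreateFileW(L"D:\\incoming\\bigfile.bin", GENERIC_WRITE, 0,
                           nullptr, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                           nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        std::fprintf(stderr, "open failed: %lu\n", GetLastError());
        return 1;
    }

    // Reserve the clusters up front, without writing them, so NTFS can
    // pick a contiguous run when one is available.
    FILE_ALLOCATION_INFO alloc = {};
    alloc.AllocationSize.QuadPart = finalSize;
    if (!SetFileInformationByHandle(h, FileAllocationInfo,
                                    &alloc, sizeof(alloc))) {
        std::fprintf(stderr, "pre-allocation failed: %lu\n", GetLastError());
    }

    // ... stream the incoming transfer into the file as usual ...

    // NTFS typically reclaims reserved-but-unwritten clusters when the
    // last handle is closed, so an aborted transfer shouldn't leave a
    // bloated file behind.
    CloseHandle(h);
    return 0;
}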

Other than that, the files are not placed by FileZilla Server; they are placed by the filesystem driver (ntfs.sys). How and why it works the way it does, we cannot know - it is a black box and only Microsoft knows its secrets.

Windows 10 will regularly defrag spinning disks. It will also defrag SSDs, to a lesser extent and for a different reason (a filesystem limit on the maximum number of file extents that can be stored in the metadata).