I just updated my Client to 3.5 a few days ago...
Last time I used FileZilla, a few months ago, to upload some large (40 GB) backup files from my laptop to my desktop, it took a while, but they eventually transferred despite several disconnections. (Occasional disconnections are unfortunately normal for the internal WiFi board in my old Dell D600 laptop, so a straight Windows copy to the desktop's shared folder is impossible - I wasn't plugging into my FIOS router's ports because all four are taken.)
The error I'm getting after everything reconnects is "550 can't access file".
After a lot of head-scratching, I finally realized that when the network drops out, the FileZilla server keeps the files open on the old connections (at least until they time out). That would be fine if the client reused the same connections - I use static IP addresses, so I'm not sure why it's creating new ones.
Every time the WiFi reconnects and the transfer tries to resume on new connections (one for each file being transferred simultaneously), if the server's old connections haven't timed out yet, they still have the file(s) locked, which prevents the new connections from writing to the files.
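If it helps, here's roughly how I picture the conflict - just a hypothetical Python sketch of my mental model, not FileZilla's actual code (the Server class, the method names, and the reply strings are all made up for illustration):

```python
# Hypothetical sketch of my mental model (NOT FileZilla internals):
# stale connections hold per-file locks, so a resumed transfer on a
# new connection is refused until the old one times out or is kicked.

class Server:
    def __init__(self):
        self.locks = {}  # filename -> connection id holding the lock

    def open_for_write(self, conn_id, filename):
        holder = self.locks.get(filename)
        if holder is not None and holder != conn_id:
            return "550 can't access file"  # lock held by a stale connection
        self.locks[filename] = conn_id
        return "150 ok"

    def drop_connection(self, conn_id):
        # kicking the user (or a timeout) releases that connection's locks
        self.locks = {f: c for f, c in self.locks.items() if c != conn_id}

server = Server()
server.open_for_write("conn-1", "backup.img")         # original transfer
# WiFi drops; the client comes back as conn-2 and tries to resume:
print(server.open_for_write("conn-2", "backup.img"))  # 550 can't access file
server.drop_connection("conn-1")                      # kick the stale session
print(server.open_for_write("conn-2", "backup.img"))  # 150 ok
```

That matches what I see: the resume only succeeds once the old connection is gone.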
Even though I have the retry delay set to 5 minutes and the error limit set to the max (99), the file access errors occur rapid-fire and very quickly exceed the 99-error limit - within 10 seconds of the transfer trying to resume, the client has received its 99 errors, gives up, and moves the queued files to the failure queue.
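Here's the back-of-the-envelope math on why the limit runs out so fast - a hypothetical sketch, assuming each failed attempt takes roughly 0.1 seconds (my guess from watching it; the retry delay doesn't seem to apply between these failures):

```python
# Hypothetical sketch: why the 99-error limit is exhausted in seconds.
# Assumes each resume attempt fails immediately while the old lock is
# held, at roughly one failure per 0.1 s (an observed guess, not a
# documented FileZilla number).

MAX_ERRORS = 99          # the client's "error limit" setting
FAILURE_INTERVAL = 0.1   # seconds per failed attempt (assumption)

errors = 0
elapsed = 0.0
while errors < MAX_ERRORS:
    errors += 1          # another "550 can't access file"
    elapsed += FAILURE_INTERVAL

print(f"gave up after {errors} errors in ~{elapsed:.0f} seconds")
# → gave up after 99 errors in ~10 seconds
```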
On the server side, I can see the old abandoned connections displaying the green bar, which resembles a progress bar but isn't visible while a transfer is underway. Each new connection pops up a similar yellow bar for a fraction of a second, the disk access error occurs, and the sequence repeats rapid-fire until the client gets its 99 errors and gives up.
If I kick the users for all the old connections showing the green bar, or if they time out, then the new connections can access the file and the transfer restarts normally. It will transfer several hundred megabytes, or a GB or more, until the network drops out again - then I have to kick all the old connections from the server and re-queue the files again.
I guess this is a very lengthy path to asking the question: "Can the server or client be set to reuse the same connections on resume, or else to force the old connections to close before it tries to resume transferring the same file from the same IP address?"
I can only guess that something has changed since a few months ago, but I really can't say whether it's the new client's fault or something I changed in the settings since then - a few weeks ago, I was messing with timeouts, number of simultaneous transfers, and things like that while transferring a few smaller files, just to see if anything affected the speed. I think I changed all of that back to where I had it originally, though.
Last edited by TXCharlie on 2011-05-30 23:12, edited 1 time in total.