File Corruption from Power Cut - Suggestions

Need help with FileZilla Client? Something does not work as expected? In this forum you may find an answer.

Moderator: Project members

dvdmrrck
500 Syntax error
Posts: 12
Joined: 2009-02-10 12:31
First name: David
Last name: Merrick

File Corruption from Power Cut - Suggestions

#1 Post by dvdmrrck » 2021-03-23 09:28

I was transferring a 40 GB .7z file from one computer to another using FZ Server and Client when there was a momentary power cut and the computer had to be restarted. I resumed the download in FZ and ran an MD5 check at the end; the final file's checksum did not match the original's. I'm sure this was caused by the power cut, as none of the other files (about 1 TB in total) have had a problem. As the connection here is quite slow, it will take a good while to retransfer.

So my question is this: is the Resume function implemented ideally for such circumstances? For example, perhaps there could be an option like 'Resume after abrupt computer shutoff' (or FZ could somehow keep track of tidy closures and assume any others were untidy). Based on the disk write cache size (supplied, likely or maximum) and any other relevant factors, FZ would then start transferring somewhat *earlier* than the existing file size, so that it redid a suitable extent of bytes at the end of the existing partial file before proceeding with the entirely new bytes extending it...
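
Roughly what I have in mind, as a rough sketch only (Python-style; the names, the tidy-shutdown flag and the 8 MiB figure are purely placeholders, not anything FZ actually has):

    import os

    ROLLBACK_BYTES = 8 * 1024 * 1024  # hypothetical 'suitable extent' of bytes to redo

    def resume_offset(local_path, previous_shutdown_was_tidy):
        """Pick the byte offset to resume a download from."""
        size = os.path.getsize(local_path)
        if previous_shutdown_was_tidy:
            return size                           # normal resume: append after the last byte
        # Untidy shutdown: back off so the possibly-corrupt tail gets rewritten.
        return max(0, size - ROLLBACK_BYTES)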

As it is a 7z file composed of many entries, I can probably fix it by extracting the extractable files, re-fetching the missing ones as a new 7z and, if needed, MD5-ing all the files at source and destination, although I think 7z's own integrity checks will naturally have covered that so long as I ensure the resulting file lists match. However, having FZ cover these abrupt failures in an improved way would be the ideal, as it would make all that unnecessary and also handle files for which that approach won't work...
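
For the manual fix-up, comparing the recovered files against the originals would be something along these lines (a generic Python sketch I'd run on both sides; nothing FZ-specific, paths and names are just examples):

    import hashlib, os

    def md5_tree(root):
        """Map relative path -> MD5 for every file under root."""
        result = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                full = os.path.join(dirpath, name)
                h = hashlib.md5()
                with open(full, 'rb') as f:
                    for chunk in iter(lambda: f.read(1024 * 1024), b''):
                        h.update(chunk)
                result[os.path.relpath(full, root)] = h.hexdigest()
        return result

    # Run md5_tree() on the source and on the destination, then compare the two dicts.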

David

botg
Site Admin
Posts: 35507
Joined: 2004-02-23 20:49
First name: Tim
Last name: Kosse

Re: File Corruption from Power Cut - Suggestions

#2 Post by botg » 2021-03-23 11:08

Which machine had to be rebooted, the source or the target machine of the download? A power outage during a write operation is always problematic; it's a good way to corrupt the entire file system.

Resume is purely based on file name and file size. Resuming "somewhat earlier" is not feasible as there is no way to figure out what portion of the file actually made it to disk. Depending on the system's caching policy, this could be dozens of gigabytes given enough RAM.
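
To illustrate the principle only (generic Python ftplib here, not FileZilla's code): the only thing a client has to go on is the local file's size, which it sends as the REST offset before continuing the transfer.

    import os
    from ftplib import FTP

    def resume_download(ftp, remote_name, local_path):
        offset = os.path.getsize(local_path)        # all the client knows: the local size
        with open(local_path, 'ab') as f:           # append from that point onwards
            # ftplib sends REST <offset> before the RETR when rest= is given
            ftp.retrbinary('RETR ' + remote_name, f.write, rest=offset)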

The proper protection against sudden power loss is an Uninterruptible Power Supply, so that the system can perform an orderly shutdown when the power fails. Basic models are quite affordable.

boco
Contributor
Posts: 26910
Joined: 2006-05-01 03:28
Location: Germany

Re: File Corruption from Power Cut - Suggestions

#3 Post by boco » 2021-03-24 01:18

@botg: Before I used FileZilla, I used a client that offered such an option for resuming: it would roll back 4 KiB upon resume, which was always enough, as most corruption occurred at the tail end of the uploaded file.
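
Just to spell out the arithmetic of that behaviour (Python-style; 4096 was simply that client's fixed value):

    import os

    def rollback_offset(local_path, rollback=4 * 1024):   # roll back a fixed 4 KiB
        return max(0, os.path.getsize(local_path) - rollback)
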
No support requests over PM! You will NOT get any reply!!!
FTP connection problems? Please read Network Configuration.
FileZilla connection test: https://filezilla-project.org/conntest.php
FileZilla Pro support: https://customerforum.fileZilla-project.org

dvdmrrck
500 Syntax error
Posts: 12
Joined: 2009-02-10 12:31
First name: David
Last name: Merrick

Re: File Corruption from Power Cut - Suggestions

#4 Post by dvdmrrck » 2021-03-24 07:08

It was the client PC that suffered the power outage and needed restarting. I would think a power outage on the server side (the machine providing the file) would not create a problem.

Like boco said, the problem may well often be at the tail end (there could be a user option for this), and I suspect there's no harm in assuming it is and applying some remediation rather than none at all. I expect a disk will generally flush its write cache very quickly, so I would anticipate the affected tail to be about the download speed multiplied by a few seconds, adjusted up to some minimum level; that is rather small and I think would cover it unless some very intense disk writing is going on outside FZ. I am not imagining the disk will have gigabytes still pending (i.e. having extended the file to its new size without actually writing the bytes). A backlog in RAM that hasn't yet been submitted to the disk system I would anticipate won't cause a problem, since the file on disk will still have integrity. At any rate, I do think some remediation is better than none, and I would imagine it would be effective in most cases.
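
To put rough numbers on it (the speed, window and minimum here are figures I'm inventing purely for illustration):

    DOWNLOAD_SPEED = 10 * 1024 * 1024     # say 10 MiB/s observed during the transfer
    FLUSH_WINDOW_S = 3                    # 'a few seconds' of possibly unflushed writes
    MIN_ROLLBACK   = 1 * 1024 * 1024      # adjusted up to some minimum level

    existing_partial_size = 12 * 1024**3  # e.g. 12 GiB already on disk

    rollback = max(MIN_ROLLBACK, DOWNLOAD_SPEED * FLUSH_WINDOW_S)   # 30 MiB in this example
    resume_offset = max(0, existing_partial_size - rollback)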

I'm also not sure whether FZ can request a hash of a file range from the server and sometimes get it (here the server is also FZ), but that could be a further possibility to explore, letting FZ check the integrity of what was written against what the server has.
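
I don't know of a standard FTP command for a ranged hash, so whether the server side could supply one is the open question; on the client side, though, checking a range is cheap, something like this (plain Python, just to illustrate):

    import hashlib

    def md5_of_range(path, start, length, chunk=1024 * 1024):
        """MD5 of `length` bytes starting at byte `start` of a local file."""
        h = hashlib.md5()
        with open(path, 'rb') as f:
            f.seek(start)
            remaining = length
            while remaining > 0:
                data = f.read(min(chunk, remaining))
                if not data:
                    break
                h.update(data)
                remaining -= len(data)
        return h.hexdigest()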

d
