This project is read-only.

ZipFile.AddFile fails depending on the file size.

AddFile truncates the entry, and Extract throws an exception: "bad read of entry test/MyFile.txt from compressed archive."
When debugging step by step, it sometimes works fine.
My code:
Private Sub Button3_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button3.Click
    ' Create a test file whose size is an exact multiple of 128 KB (131072 * 19 = 2490368)
    Dim str As New String(" "c, 2490368)
    IO.File.WriteAllText("C:\test\MyFile.txt", str)
End Sub
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
    Using zip As New Ionic.Zip.ZipFile
        zip.AddFile("C:\test\MyFile.txt", "test")
        zip.Save("C:\test\test.zip")
    End Using
End Sub
Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
    Using zip As Ionic.Zip.ZipFile = Ionic.Zip.ZipFile.Read("C:\test\test.zip")
        For Each arch As Ionic.Zip.ZipEntry In zip
            ' Throws: bad read of entry test/MyFile.txt from compressed archive.
            arch.Extract("C:\test\extracted")
        Next
    End Using
End Sub


Pointy wrote Aug 15, 2011 at 8:29 PM

Hi pablo,

I've reproduced the error on my machine - it looks like it's something to do with the parallel compression algorithm and your file size (which is a multiple of the default 128k buffer size for parallel compression: 131072 * 19 = 2490368). The zip file fails to extract in Windows Explorer, and WinZip reports a bad CRC ("Expected 216E2C97, actual 3832F8D9.").

However, if you disable parallel compression with "zip.ParallelDeflateThreshold = -1" in your Button1_Click it seems to avoid the issue and generates a valid zip file which Windows Explorer, WinZip and DotNetZip can all extract from. (Parallel compression also works fine for input data which is 1, 2, 3 or 4 multiples of 128k, but I think that's because it allocates them all to the same processor).
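The boundary condition described above can be checked with a small standalone helper: an input hits the bad case when its length is an exact positive multiple of the 128k (131072-byte) buffer. This is just a sketch based on the numbers in this thread; `IsParallelBufferMultiple` is a hypothetical name, not a DotNetZip API.

```csharp
using System;

static class ParallelBufferCheck
{
    // Default parallel-deflate buffer size quoted in this thread: 128 KB.
    const long BufferSize = 131072;

    // True when a length lands exactly on a buffer boundary, i.e. the sizes
    // reported here to produce corrupt archives (131072 * 19 = 2490368, etc.).
    public static bool IsParallelBufferMultiple(long length) =>
        length > 0 && length % BufferSize == 0;

    static void Main()
    {
        Console.WriteLine(IsParallelBufferMultiple(2490368)); // True
        Console.WriteLine(IsParallelBufferMultiple(2490369)); // False
    }
}
```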

The code for the parallel deflate algorithm is waaaay more complex than I can cope with, so I can't really offer any fix better than the workaround above.

Hope this helps,


DexMiK wrote Aug 16, 2011 at 3:25 PM

DexMiK wrote Aug 16, 2011 at 3:26 PM

delete my last and this comment please, wrong work item :(

DexMiK wrote Aug 16, 2011 at 4:35 PM

OK, I found something that might be related to this, but I need further testing...

I zip .mdb files (Access) - all closed for sure (we test that).

Now on some files (I cannot provide any yet, but I hope I can really soon), after zip.Save-ing them into the zip file (without errors),
I extract them and get a bad read of that file... (I assume it's also a CRC check that fails).

When I set zip.ParallelDeflateThreshold = -1 before zip.Save(...), it seems to work fine as well.


XrstalLens wrote Sep 16, 2011 at 9:04 PM

I'm getting the same problem (bad read of entry) on a file of size 5373952 bytes, which is exactly 41 times the parallel buffer read size. The zip creation succeeds, but fails to extract. I've had the same error on other files, but didn't know of this problem and didn't know to look at the exact file size.

In my case I'm going to try working around it by turning off compression for any file that's an exact multiple of 128K in my AddProgress event handler (I already turn off compression for other files like mp3s) to see if that avoids the problem.

rhpainte wrote Jan 6, 2012 at 7:30 PM

The issue is reproducible any time you are compressing data that fits into an integral number of internal buffers inside ParallelDeflateOutputStream (the default size is set to 64KB). When it has finished loading your data into the buffers, Close is called, which results in _Flush(true) being called. Since we have an integral size, _currentlyFilling is equal to -1 and we go directly to EmitPendingBuffers(true, false) and _FlushFinish().

Inside EmitPendingBuffers, the outer do {} while (doAll && (_lastWritten != _latestCompressed)) can exit prematurely, as it doesn't take into account the number of buffers filled: our threads could still be busy compressing blocks 3-15 with _lastWritten and _latestCompressed both equalling 2.

Workarounds:
  1. The suggestion by Mike (set ParallelDeflateThreshold = -1) is great
  2. Modify the buffer size so that it is not nicely aligned with your file size

The suggested fix, at line 987, is to change
} while (doAll && (_lastWritten != _latestCompressed));
to
} while (doAll && (_lastWritten != _latestCompressed || _lastWritten != _lastFilled));
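The premature exit can be seen by plugging the counter values from the scenario above into the two loop conditions. This is a standalone sketch with hypothetical names, not DotNetZip code:

```csharp
using System;

static class EmitLoopConditions
{
    // Original exit test from EmitPendingBuffers: keep looping only while the
    // last buffer written lags the last buffer compressed.
    public static bool OriginalKeepsLooping(int lastWritten, int latestCompressed, int lastFilled) =>
        lastWritten != latestCompressed;

    // Patched test: also keep looping while buffers are filled but not yet written.
    public static bool PatchedKeepsLooping(int lastWritten, int latestCompressed, int lastFilled) =>
        lastWritten != latestCompressed || lastWritten != lastFilled;

    static void Main()
    {
        // Scenario from the comment: worker threads still compressing buffers 3-15.
        int lastWritten = 2, latestCompressed = 2, lastFilled = 15;

        // Original condition is false -> the flush loop exits prematurely,
        // truncating the entry.
        Console.WriteLine(OriginalKeepsLooping(lastWritten, latestCompressed, lastFilled)); // False
        // Patched condition is true -> the loop waits for the remaining buffers.
        Console.WriteLine(PatchedKeepsLooping(lastWritten, latestCompressed, lastFilled));  // True
    }
}
```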

divo wrote Nov 16, 2012 at 12:00 PM

This is a short sample program exhibiting the defect:

using System;
using System.IO;
using System.Text;
using Ionic.Zip;

namespace Issue14087
{
    class Program
    {
        static void Main(string[] args)
        {
            // create a buffer (or file) that is
            //  - larger than the default ParallelDeflateThreshold (= 512 kB)
            //  - has a size which is a multiple of 128 kB
            var buffer = new byte[917504];
            for (var i = 0; i < buffer.Length; i++)
                buffer[i] = (byte)'a';

            using (var zippedStream = new MemoryStream())
            {
                using (var zip = new ZipFile(Encoding.UTF8))
                {
                    // uncommenting the following line can be used as a work-around
                    // zip.ParallelDeflateThreshold = -1;
                    zip.AddEntry("entry.txt", buffer);
                    zip.Save(zippedStream);
                }
                zippedStream.Position = 0;

                using (var zip = ZipFile.Read(zippedStream))
                using (var ms = new MemoryStream())
                {
                    // This line throws a BadReadException
                    zip["entry.txt"].Extract(ms);
                }
            }
        }
    }
}

divo wrote Nov 16, 2012 at 1:32 PM

The issue's impact is classified as "Low". I'd consider it highly critical, because the generated zip file is basically corrupt and data might be lost.

dagc wrote Jan 22, 2013 at 11:03 AM

I understand it can be hard to debug this bug, but it's very strange to flag it as a "low impact" bug, since a zip library's objective is to create readable zip files. I experienced file corruption for a file I attempted to compress (its size was 3801088 bytes = 29 * 131072), and setting ParallelDeflateThreshold to -1 fixed the issue. In addition, bugs like these are very hard to discover during testing.

mohammadforutan wrote Feb 27, 2013 at 1:08 PM

"zip.ParallelDeflateThreshold = -1" resolved my issue and saved my day, thank you Mike

iosifpetre wrote Apr 11, 2013 at 5:31 PM

mohammadforutan, it's nice that "zip.ParallelDeflateThreshold = -1" resolved your issue, but that only helps when creating the archive. How do I extract the files from an archive that is already corrupted? I need the data from the archived files..
please help

divo wrote Apr 11, 2013 at 7:05 PM

@iosifpetre: There is no way to recover the zip archive once it is corrupted. That is why this issue is so critical. I'd recommend using another zip library like SharpZipLib or the functionality included in the current .NET Framework version.

bob0043 wrote May 3, 2013 at 7:27 PM

I concur with rhpainte as to the location of the problem but differ slightly as to the analysis and fix.

The number of buffers used is partially dependent on the number of processors. Each set of buffers is handled by a separate thread. The variables _latestCompressed, _lastFilled, and _lastWritten keep track of, respectively, the last buffer that has been compressed, the last buffer that has been filled with input awaiting compression, and the last buffer written to the output.

The code is such that _latestCompressed <= _lastFilled is always true. The devil is in that "<" part of the expression. The EmitPendingBuffers function, as written, exits when _lastWritten == _latestCompressed, but that may not be the last buffer that needs to be written -- the last to be written should be _lastFilled. There is a race condition here: depending on input file size, buffer count, thread count, processor workload, and the phase of the moon, _latestCompressed may or may not be equal to _lastFilled when EmitPendingBuffers checks it.

So the fix should be to change
} while (doAll && (_lastWritten != _latestCompressed));
to
} while (doAll && (_lastWritten != _lastFilled));

This is in function EmitPendingBuffers. In the copy of the source I have that is line 971 of ParallelDeflateOutputStream.cs. Rhpainte notes it as line 987 of the same file.

Using "} while (doAll && (_lastWritten != _latestCompressed || _lastWritten != _lastFilled));", as rhpainte suggested, has an "extra" check: if (_lastWritten == _lastFilled) is true, then (_lastWritten == _latestCompressed) is also true (only compressed buffers are written), so there is no need to check it.
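That implication can be checked exhaustively over every counter state satisfying the invariant _lastWritten <= _latestCompressed <= _lastFilled, showing the two proposed exit conditions always agree. A standalone sketch with hypothetical names, not DotNetZip code:

```csharp
using System;

static class ConditionEquivalence
{
    // For all counter states obeying the invariant from the comment above
    // (lastWritten <= latestCompressed <= lastFilled, since only filled buffers
    // get compressed and only compressed buffers get written), compare
    // rhpainte's compound exit test against the simpler one.
    public static bool Holds(int max)
    {
        for (int lastFilled = 0; lastFilled <= max; lastFilled++)
            for (int latestCompressed = 0; latestCompressed <= lastFilled; latestCompressed++)
                for (int lastWritten = 0; lastWritten <= latestCompressed; lastWritten++)
                {
                    bool compound = lastWritten != latestCompressed || lastWritten != lastFilled;
                    bool simple = lastWritten != lastFilled;
                    if (compound != simple) return false;
                }
        return true;
    }

    static void Main() => Console.WriteLine(Holds(20)); // True
}
```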

mlavoie88 wrote Jun 7, 2013 at 5:01 PM

I also just encountered this issue. Easy to work around with the ParallelDeflate setting, but difficult to isolate the failure since it occurs only on files of a specific size. I concur with earlier comments that the impact of this bug should be higher.

nmg196 wrote Sep 25, 2013 at 3:11 PM

Why is the Impact set to "low"? For me, and many others above, the result of this bug was a corrupt zip file, meaning the system using this library completely failed. Anything resulting in a corrupt zip file should surely result in a "High" impact level?

johnbuuck wrote Nov 26, 2013 at 5:46 PM

Yes, this bug fits very well the criteria for elevated priority.

(1) It will eventually affect most anyone that uses the product over a long period of time (since many binary file formats have sizes that are multiples of 64k)
(2) It doesn't manifest in a straight-forward error that can be recovered from programmatically or manually
(3) Its impact is the irretrievable loss of data
(4) The problem is only discovered (possibly much) later when attempting to unzip the file

It would be much better to simply issue an error whenever "Parallel Deflate" is active and the file size is a multiple of the internal buffer size. That way the user is not fooled into thinking they have a retrievable archive of the file. The error message could explain the problem and suggest a work-around. Instead, more and more corrupt archives get created with no notification until eventually someone attempts to retrieve the file.
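Short of patching the library or erroring out, a caller could also pick the threshold per file, disabling parallel deflate only for the sizes that trigger the bug. This is a sketch under the numbers quoted in this thread (128 KB buffer, 512 kB default threshold); `SafeThreshold.For` is a hypothetical helper, not a DotNetZip API:

```csharp
using System;

static class SafeThreshold
{
    const long BufferMultiple = 131072;       // 128 KB buffer size quoted in this thread
    const long DefaultThreshold = 512 * 1024; // default ParallelDeflateThreshold, per divo's comment

    // Threshold to assign for one file: -1 (parallel deflate off) only when the
    // size lands exactly on a buffer boundary; the default otherwise.
    public static long For(long fileLength) =>
        fileLength > 0 && fileLength % BufferMultiple == 0 ? -1 : DefaultThreshold;

    static void Main()
    {
        Console.WriteLine(For(2490368)); // -1 (131072 * 19, the size from this report)
        Console.WriteLine(For(2490369)); // 524288
    }
}
```

The returned value would then be assigned to zip.ParallelDeflateThreshold before zip.Save, keeping parallel compression for every size that is known to be safe.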

mrutter wrote Apr 10, 2014 at 12:35 PM

Hello everybody,
I agree with many others above: the impact of this bug should be elevated.

Today, I encountered this bug after hundreds of days where my application worked fine. I had, for the first time, a source file of 28,639,232 bytes (which is 437 times 64K)!

It looks like this project is not active anymore... is it?

Thank you very much to the guy who posted a valid workaround (ParallelDeflateThreshold = -1 worked for me too).

Jcwrequests wrote Apr 29, 2014 at 12:15 AM

I have also experienced the same issue, which I commented on here. Please make this a high-priority item.


Jason Wyglendowski

ditto3 wrote Nov 4, 2014 at 1:28 AM

It's a bug - you can fix it in Zlib\ParallelDeflateOutputStream.cs by changing

} while (doAll && (_lastWritten != _latestCompressed));

to

} while (doAll && (_lastWritten != _lastFilled));

See for more info.

Serg_G wrote Nov 25, 2014 at 4:24 PM

I reproduced the bug. The created zip also cannot be opened by WinRAR or 7-Zip, as it is corrupted. But it is correctly unpacked by System.IO.Compression.ZipArchive. I cannot understand why.

Birbilis wrote Feb 3, 2015 at 12:20 PM

milan_m wrote Aug 1, 2015 at 9:00 PM

patch 14368 resolves this issue?
No, it does not.

The same goes for the suggestion by bob0043 (wrote May 3, 2013 at 8:27 PM), repeated by ditto3 (wrote Nov 4, 2014 at 2:28 AM):
} while (doAll && (_lastWritten != _latestCompressed));
} while (doAll && (_lastWritten != _lastFilled));

This didn't work for me either.
I have a file 1,270,743,040 bytes long and an Intel i7 processor (8 threads).
The only workaround was ParallelDeflateThreshold = -1. But then it is very slow... :(

Vyach wrote Mar 24, 2016 at 4:55 PM

No fix yet? This issue causes us a lot of grief :(. Turning ParallelDeflateThreshold off slows down the zipping process at least twofold, which is unacceptable.

ilengyel wrote Apr 13, 2016 at 10:53 AM

It appears that this space is dead. Head over to which appears to be the most active continuation of this library.

I wonder who we need to notify to de-register this space to remove any confusion.