Should/Can I do any optimization when using DotNetZip?

Jul 1, 2011 at 1:45 PM

I'm currently using a single BackgroundWorker thread to create multiple zip files (each of which can be quite large).

I remember reading that some optimization is already done in the backend, and I just wanted to check whether there is anything I could do to improve the speed in a noticeable way.

Wonderful library. It certainly saved my skin on this project.

Jul 5, 2011 at 1:43 AM

Show your code - but generally there is no "optimization" you need to do, provided you are using efficient streaming. For example, in some cases people fill buffers unnecessarily, when there are simpler ways to do things by using the public, documented API correctly. But memory fills are generally small in cost compared with compression and encryption, so this really is a minor issue.
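A minimal sketch of what "filling buffers unnecessarily" can look like, versus letting the library read the file itself (the file paths here are made up for illustration):

    using System.IO;
    using Ionic.Zip;

    class Example
    {
        static void Main()
        {
            // Unnecessary buffer fill: reads the whole file into memory
            // before handing it to the library.
            using (var zip = new ZipFile())
            {
                byte[] buffer = File.ReadAllBytes(@"C:\data\bigfile.dat"); // hypothetical path
                zip.AddEntry("bigfile.dat", buffer);
                zip.Save(@"C:\out\archive1.zip");
            }

            // Simpler and leaner: let ZipFile open and stream the file itself.
            using (var zip = new ZipFile())
            {
                zip.AddFile(@"C:\data\bigfile.dat", "");
                zip.Save(@"C:\out\archive2.zip");
            }
        }
    }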

To tweak speed you can lower the compression level. But that makes the resulting zipfile larger, which may not be what you want.
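For example, a rough sketch of trading compression ratio for speed via the CompressionLevel property (the directory and output paths are hypothetical):

    using Ionic.Zip;
    using Ionic.Zlib;

    class FastZip
    {
        static void Main()
        {
            using (var zip = new ZipFile())
            {
                // Trade compression ratio for speed; CompressionLevel.None
                // would skip deflation entirely and just store the entries.
                zip.CompressionLevel = CompressionLevel.BestSpeed;
                zip.AddDirectory(@"C:\data\to-archive", "data"); // hypothetical paths
                zip.Save(@"C:\out\fast.zip");
            }
        }
    }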

Jul 5, 2011 at 2:49 PM
Edited Jul 5, 2011 at 2:50 PM

Hmm. Well, I'm not really doing any of the stream stuff directly. I'm just using the methods provided by your library.

It's a bit messy since I haven't cleaned out some old code I was playing with a while ago. It's basically just a foreach that iterates through an array of classes I made and creates a zip file for each with a using (ZipFile...
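Roughly, the pattern looks like this (the class and property names are hypothetical, since I haven't posted the actual code):

    using Ionic.Zip;

    // Hypothetical item type standing in for the class I wrote.
    class BackupJob
    {
        public string SourceDirectory { get; set; }
        public string OutputZipPath { get; set; }
    }

    class Worker
    {
        // Runs on the BackgroundWorker thread: one zip file per job.
        static void CreateArchives(BackupJob[] jobs)
        {
            foreach (var job in jobs)
            {
                using (var zip = new ZipFile())
                {
                    zip.AddDirectory(job.SourceDirectory);
                    zip.Save(job.OutputZipPath);
                }
            }
        }
    }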

I get the impression from what you said that there isn't much I can do (with my level of knowledge), which is fine. If that's the case, I appreciate the answer.

Jul 5, 2011 at 7:48 PM

Looks reasonable to me.