Web Capture in Acrobat 8


Today was the first time I tried using Acrobat’s built-in Web Capture to grab a copy of an entire website. It didn’t go so well. I’ve heard that on smaller sites it works really well, giving you a PDF version of every page it can spider, but on this particular site my first couple of attempts got only so far before crashing. I suspect we’ve hit the limit on how large a site can be before this tool stops being effective.

On my third attempt I managed to collect and save about 2,000 pages, stopping the capture before it could crash and then restarting the whole collection to run overnight. A coworker is using another tool to try to capture the site overnight as well. We’ll see if we have more success with these.

Has anyone out there had to capture a very large commercial site? What did you use, and how successful were you?

Technorati Tags: Acrobat, WebCapture

2 Responses

  1. Andy

    I’ve used http://www.httrack.com/ to get offline copies of a website before – works very well.
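    For anyone who wants to try it, a typical HTTrack command-line invocation looks something like this — the URL, output directory, and depth limit below are placeholders, not a recipe for any particular site:

    ```shell
    # Mirror a site into ./mirror, following links only within the original domain.
    # -O sets the output path, the "+..." filter keeps the spider on the site,
    # -r6 caps the link depth (tune for very large sites), and -v is verbose output.
    httrack "https://example.com/" -O ./mirror "+*.example.com/*" -r6 -v
    ```

    On a big commercial site, tightening the depth limit and filters is probably the difference between a usable mirror and a crawl that never finishes.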

  2. Mike McBride

    I think that’s what my coworker tried, but I need to double check that. Her attempt crapped out after about 10,000 files. If it wasn’t what she was using, that’ll be the next tool in the lineup!
