Today was the first time I tried to use Acrobat’s built-in web capture to grab a copy of an entire website. It didn’t go so well. I’ve heard it works well on smaller sites, giving you a PDF version of every page it can spider, but on this particular site my first couple of attempts got only so far before crashing. I suspect we’ve hit the limit of how large a site this tool can handle effectively.
On my third attempt I managed to collect and save about 2,000 pages, stopping the capture before it could crash and then restarting the whole collection to run overnight; a coworker is using another tool to try to capture the site overnight as well. We’ll see whether we have more success with these.
Has anyone out there had to capture a very large commercial site? What did you use, and how successful were you?