Website Downloader: download entire Wayback Machine site archives

Website Downloader is a relatively new service that enables you to download individual pages or entire website archives from the Wayback Machine.
Update: Website Downloader is no longer free. You are asked to pay before you get to see a single bit of the specified website, so it is no longer recommended. The only free solution that I'm aware of right now is Wayback Machine Downloader. It is a Ruby script, however, and requires more or less setup time depending on the operating system that you are using. Archivarix is an online service that retrieves up to 200 files from an archive for free; if the site is small, this may do as well. All other services are either no longer working or paid.
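For the technically inclined, the Internet Archive exposes a public CDX API that lists the captures it holds for a domain, which is the kind of index a tool like Wayback Machine Downloader consults before fetching files. A minimal sketch in Python; the helper name and the 200-file cap (mirroring Archivarix's free tier) are illustrative assumptions, not part of either service:

```python
from urllib.parse import urlencode

def cdx_query(domain, limit=200):
    """Build a Wayback Machine CDX API query URL that lists up to
    `limit` captured URLs for the given domain (JSON output)."""
    params = {
        "url": domain + "/*",   # match every page under the domain
        "output": "json",
        "collapse": "urlkey",   # one row per distinct URL
        "limit": limit,
    }
    return "https://web.archive.org/cdx/search/cdx?" + urlencode(params)

print(cdx_query("example.com"))
```

Fetching the resulting URL (for example with urllib.request) returns a JSON array of captures, each with a timestamp and the original URL.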
The Wayback Machine, part of the Internet Archive, is a very useful service. It enables you to browse website snapshots recorded by the site's crawler.
You may use it to check out past versions of a page on the Internet, or to access pages that are permanently or temporarily unavailable. As a webmaster, it is also a great option for recovering web pages that are no longer accessible (maybe because your hosting company terminated the account, or because of data corruption and a lack of backups).
Several browser extensions, such as Wayback Fox for Firefox or Wayback Machine for Chrome and Firefox, make use of the Wayback Machine's archive to provide users with copies of pages that are not accessible.
Website Downloader
While you can download any page on the Wayback Machine website using your web browser's "Save Page" functionality, doing so for an entire website may not be feasible depending on its size. Not a problem if a site has just a few pages, but if it has thousands of them, you'd spend entire weeks downloading those pages manually.
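Bulk downloading is scriptable at all because each archived copy lives behind a predictable URL pattern: https://web.archive.org/web/&lt;timestamp&gt;/&lt;original URL&gt;. A small sketch of that pattern; the function name and example values are my own:

```python
def snapshot_url(original_url, timestamp):
    """Return the Wayback Machine URL for a capture of `original_url`
    taken at `timestamp` (format: YYYYMMDDhhmmss)."""
    return f"https://web.archive.org/web/{timestamp}/{original_url}"

# Appending "id_" to the timestamp requests the raw capture,
# without the Wayback Machine's navigation toolbar injected.
print(snapshot_url("http://example.com/", "20170101000000"))
```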
Enter Website Downloader: the free service lets you download a website's entire archive to the local system.
All you have to do is type the URL that you want to download on the Website Downloader site, and select whether you want to download the homepage only, or the entire website.
Note: It may take minutes or longer for the site to be processed by Website Downloader.
The process itself is straightforward. The service grabs each HTML file of the site (or just one if you select to download a single URL), and clones it to the local hard drive of the computer. Links are converted automatically so that they can be used off-line, and images, PDF documents, CSS and JavaScript files are downloaded and referenced correctly as well.
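The link-conversion step can be pictured as rewriting same-site absolute URLs into relative file paths, so the saved pages reference each other on disk. A rough sketch with a hypothetical helper (Website Downloader's actual code is not public):

```python
from urllib.parse import urlparse

def to_local_path(href, site="example.com"):
    """Rewrite a same-site link to a local relative path;
    leave external links untouched."""
    parts = urlparse(href)
    if parts.netloc in ("", site):  # relative link or same host
        return parts.path.lstrip("/") or "index.html"
    return href                     # external link: keep as-is

print(to_local_path("https://example.com/about.html"))  # about.html
```

The same rewrite would be applied to image, CSS, and JavaScript references so the copy works fully off-line.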
You may download the copy of the site as a zip file to your local system after the background process completes, or use the service to get a quote and get the copy converted to a WordPress site.
Closing Words
Website Downloader is an interesting service. It was swarmed with requests at the time of the review, so the generation of website downloads, even of single pages, may take longer than it should.
There is also the chance that some people will abuse the service by downloading entire websites, and publishing them again on the Internet.
Now You: What's your take on Website Downloader?


Doesn’t Windows 8 know that www. or http:// are passé?
Well, it is a bit difficult to distinguish between name.com domains and files, for instance.
I know a service made by Google that is similar to Google Bookmarks.
http://www.google.com/saved
@Ashwin – thankfully you deleted my comment; who knows how many “gamers” would have disagreed!
@Martin
The comments section under this very article (3 comments) is identical to the comments section found under the following article:
https://www.ghacks.net/2023/08/15/netflix-is-testing-game-streaming-on-tvs-and-computers/
Not sure what the issue is, but have seen this issue under some other articles recently but did not report it back then.
Omg a badge!!!
Some tangible reward lmao.
It sucks that redditors are going to love the fuck out of it too.
With the cloud, there is no such thing as unlimited storage or privacy. Stop relying on these tech scums. Purchase your own hardware and develop your own solutions.
This is a certified reddit cringe moment. Hilarious how the article’s author tries to dress it up like it’s anything more than a png for doing the reddit corporation’s moderation work for free (or for bribes from companies and political groups)
Almost all unlimited services have a real limit.
And this comment is written on the dropbox article from August 25, 2023.
First comment > @ilev said on August 4, 2012 at 7:53 pm
For God’s sake, fix the comments soon please! :[
Yes. Please. Fix the comments.
With Google Chrome, it’s only been 1,500 for some time now.
Anyone who wants to force me in such a way into buying something that I can get elsewhere for free will certainly never see a single dime from my side. I don’t even know how stupid their marketing department is to impose these limits on users instead of offering a valuable product to the paying faction. But they don’t. Even if you pay, you get something that is also available for free elsewhere.
The algorithm has also become less and less savvy in terms of e.g. English/German translations. It used to be that the bot could sort of sense what you were trying to say and put it into different colloquialisms, which was even fun because it was like, “I know what you’re trying to say here, how about…” Now it’s in parts too stupid to translate the simplest sentences correctly, and the suggestions it makes are at times as moronic as those made by Google Translations.
If this is a deep-learning AI that learns from users’ translations and the phrases they choose most often – which, by the way, is a valuable, monetarily worthwhile contribution of every free user to this project: they invest their time and texts, thereby providing the data the AI needs to perform as nicely as they brag about in the first place – then, alas, the more unprofessional users discovered the translator, the greater the pool of linguistically illiterate users became, and the worse the language of this deep-learning bot got, as it now learns the drivel of every Tom, Dick and Harry out there. That is why I now get their Mickey Mouse language as suggestions: the inane language of people who can barely spell the alphabet, it seems.
And as a thank you for our time and effort in helping them and their AI learn, they’ve lowered the limit from what was once 5,000 to now 1,500…? A big “fuck off” from here for that! Not a brass farthing from me for this attitude and behaviour, not in a hundred years.