Create publicly available web page archives with Archive.is

Web pages can change from one moment to the next. Entire websites may go down and take their contents with them, content may be edited or removed, or sites may become unavailable because of technical issues.
If you need access to that information, or want to save a copy of a page to make sure you can access it at all times, then you have several options at your disposal to do so.
Probably the easiest is to save the web page to your local system. Hit Ctrl-S while on the page, pick a descriptive name and a local directory, and all of the page's contents are saved to the computer you are working on. Extensions like Mozilla Archive Format improve on that by saving all contents to a single file.
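If you need to save more than a handful of pages, scripting the download can be faster than saving each one by hand. Here is a minimal sketch in Python using only the standard library; the address and file name are placeholders. Note that, unlike Ctrl-S in the browser, this saves only the raw HTML, not images or stylesheets.

```python
# Minimal sketch: fetch a page and save its raw HTML locally.
# Unlike the browser's Ctrl-S, this does not fetch images or CSS.
import urllib.request

url = "https://example.com/article"  # placeholder address
with urllib.request.urlopen(url) as response:
    html = response.read()

with open("article.html", "wb") as f:  # placeholder file name
    f.write(html)
```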
Another option is to take a screenshot of the page, or part of it, instead. This works as well and has the advantage that you end up with a single file, but the disadvantage that you cannot copy text from it.
Tip: Firefox users can hit Shift-F2, type screenshot and hit Enter to create a screenshot of the active web page. Chrome users can save web pages as PDF documents natively instead.
The third local option comes in the form of website archivers. Programs like HTTrack crawl websites for you and save all contents to a local directory that you can browse at any time, even without an Internet connection.
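At their core, such archivers fetch a page, store it, and follow its links. Below is a toy sketch of that crawl loop in Python, one level deep and restricted to the same site; the start address is a placeholder. Real tools like HTTrack additionally rewrite links and download images, stylesheets and scripts.

```python
# Toy sketch of what website archivers do: fetch a page, save it,
# then fetch the pages it links to on the same site (one level deep).
import os
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkParser(HTMLParser):
    """Collects the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def save(url, directory):
    """Download one page into the mirror directory, return its HTML."""
    os.makedirs(directory, exist_ok=True)
    name = urlparse(url).path.strip("/").replace("/", "_") or "index"
    with urllib.request.urlopen(url) as response:
        html = response.read()
    with open(os.path.join(directory, name + ".html"), "wb") as f:
        f.write(html)
    return html.decode("utf-8", errors="replace")

start = "https://example.com/"  # placeholder start page
host = urlparse(start).netloc
parser = LinkParser()
parser.feed(save(start, "mirror"))
for link in parser.links:
    absolute = urljoin(start, link)
    if urlparse(absolute).netloc == host:  # stay on the same site
        save(absolute, "mirror")
```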
Remote options can be useful as well. The most popular is without doubt Archive.org, which creates automatic snapshots of popular Internet pages that you can access later. Want to see one of the first versions of Ghacks? Here you go.
The downside is that you cannot control what is saved.
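You can at least check programmatically whether Archive.org already holds a snapshot of a page. A short Python sketch using the Wayback Machine's public availability endpoint; the lookup address is a placeholder, and the response shape shown matches the API's documented format at the time of writing.

```python
# Ask the Wayback Machine whether a snapshot of a page exists.
import json
import urllib.request
from urllib.parse import quote

url = "https://www.ghacks.net/"  # placeholder page to look up
api = "https://archive.org/wayback/available?url=" + quote(url, safe="")
with urllib.request.urlopen(api) as response:
    data = json.load(response)

snapshot = data.get("archived_snapshots", {}).get("closest")
if snapshot:
    print("Latest snapshot:", snapshot["url"])
else:
    print("No snapshot found.")
```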
Archive.is is a free service that helps you out here. To use it, paste a web address into the form on the service's main page and hit submit url afterwards.
The service takes two snapshots of the page at that point in time and makes them available publicly.
The first is a static snapshot of the site: images, text and other static contents are included, while dynamic contents and scripts are not.
The second is a screenshot of the page instead.
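Submission can also be scripted. A hedged sketch in Python, assuming the main-page form posts a single url field to the /submit/ endpoint; verify that against the live form before relying on it, and note that the service may treat non-browser requests differently.

```python
# Hedged sketch: submit a page to Archive.is without the browser form.
# Assumption: the main-page form posts a "url" field to /submit/.
import urllib.parse
import urllib.request

page = "https://example.com/article"  # placeholder page to archive
data = urllib.parse.urlencode({"url": page}).encode()
request = urllib.request.Request("https://archive.is/submit/", data=data)
with urllib.request.urlopen(request) as response:
    # The final response URL should point at the new snapshot.
    print(response.url)
```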
An option to download the data is provided. Note that this downloads the textual copy of the site only and not the screenshot.
A Firefox add-on has been created for the service that may be useful to some users: once installed, it automatically takes a snapshot of every web page that you bookmark in the browser.
Word of warning: All snapshots are publicly available. While pages that require authentication cannot be saved by the service, it may still take snapshots of pages that you would rather not reveal to the public.
An option to password protect snapshots or protect them using accounts would certainly be useful in this regard.
The service can prove useful in other situations as well. For instance, if you cannot access a resource on the Internet, you may still be able to reach it through Archive.is. While that provides access to text and image information only, it should be sufficient in most cases.
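Looking up an existing snapshot can be scripted too. A hedged sketch, assuming the service redirects /newest/ followed by a page address to the most recent snapshot of that page and returns a 404 when nothing has been archived yet:

```python
# Hedged sketch: look up the most recent Archive.is snapshot of a page.
# Assumption: /newest/<url> redirects to the latest snapshot.
import urllib.request
from urllib.error import HTTPError

page = "https://example.com/article"  # placeholder page to look up
try:
    with urllib.request.urlopen("https://archive.is/newest/" + page) as r:
        print("Snapshot:", r.url)  # final URL after the redirect
except HTTPError:
    print("No snapshot of this page yet.")
```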
Closing Words
Archive.is is a useful but specialized service. It works well right out of the box but would benefit from protective features or an optional account system. All in all though, it can be quite handy at times to save web page information permanently in another location on the Internet.