Most web browsers let you download individual images with relative ease. It usually takes only a couple of clicks to do so.
You will run into issues, however, when you try to download multiple images displayed on one or more pages. While selecting images individually for download still works, it takes a lot of time. Time that is better spent doing something else.
Bulk Image Downloader looks at first glance just like any other mass downloader out there. But if you spend some time getting used to it, you will realize that it is probably the most sophisticated program in this niche that you can get your hands on.
Several features set the program apart, including its excellent parser and automation, but also the way multiple pages are crawled by the application, and options to use variables in addresses.
Bulk Image Downloader
The installation of the program should not pose any trouble. The setup is clean and does not contain any third-party offers.
Once done, you get the option to launch the main interface and a small drop box onto which you can drag and drop addresses.
Before you start to download your first batch of images, you may want to jump to the configuration first. Important settings listed here include:
- The maximum number of pages per address that get downloaded (set to 20 by default). This means that if you select to download all images from reddit.com/r/aww/, Bulk Image Downloader will automatically parse the front page and the 19 pages that follow it for images, and add them to the download queue.
- Integration in Internet Explorer and Opera. Firefox and Chrome users can install the BID extension for their browser instead.
- The maximum number of download threads (5 by default).
- Define a minimum or maximum file size for picture downloads.
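The page limit and thread settings above map onto a simple crawl loop: parse each page of an address for image links, stop at the page limit, then hand the queue to a pool of download threads. A minimal sketch of that idea in Python; the `?page=N` pattern and the function names are mine for illustration, not BID's internals:

```python
from html.parser import HTMLParser

MAX_PAGES = 20    # pages parsed per address (BID's default)
MAX_THREADS = 5   # simultaneous download threads (BID's default)

class ImageLinkParser(HTMLParser):
    """Collect the src attribute of every <img> tag on a page."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.images.append(src)

def find_images(html):
    parser = ImageLinkParser()
    parser.feed(html)
    return parser.images

def crawl(fetch, base_url, max_pages=MAX_PAGES):
    """Fetch pages 1..max_pages of an address and queue every image found.
    `fetch` is any callable returning a page's HTML, or None for a
    missing page; the '?page=N' pattern is a placeholder, real sites differ."""
    queue = []
    for page in range(1, max_pages + 1):
        html = fetch(f"{base_url}?page={page}")
        if html is None:
            break  # no further pages exist
        queue.extend(find_images(html))
    return queue
```

Downloading the queued images with five parallel threads would then be a matter of handing the queue to something like `concurrent.futures.ThreadPoolExecutor(MAX_THREADS)`, with the minimum and maximum file size checks applied to each response.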
You may also want to define the save directory in the main Bulk Image Downloader window to make sure images are saved into an appropriate location on your system.
Using the program
It could not be easier to use the Bulk Image Downloader application. All you have to do is add a web address to the program, either by dragging and dropping it on the drop box, or by adding it to the main interface directly.
The program will automatically start to parse the URL based on the selected configuration. If things go well, you will soon see image thumbnails in the lower half of the screen, indicating that downloadable images have been found.
Above that, you find filtering options that you need to know about. By default, BID displays only full-sized images and downloads those once you give the command. This is usually the lowest number of images shown in the filter toolbar. You can switch the filter to display all images found on a page, or only embedded pictures.
You can select items individually here for download, or hit the download button to download them all in rapid succession. The page title is used by default as the name of the folder the images are stored in. You can change the title before you start the process if you like; it may make sense, for example, to add the address the images were saved from to the folder name.
Existing images are overwritten by default, which you can change in the main interface as well. You can either have them skipped automatically, or renamed automatically so that the new image is saved and the existing one preserved.
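That overwrite/skip/rename choice boils down to a filename-collision policy. A short Python sketch of the idea; the function and policy names are my own, not BID's:

```python
import os

def resolve_collision(path, policy="rename"):
    """Decide what to do when the target file already exists.
    policy: 'overwrite' (BID's default), 'skip', or 'rename'."""
    if not os.path.exists(path):
        return path                # no collision, save as-is
    if policy == "overwrite":
        return path                # replace the existing file
    if policy == "skip":
        return None                # caller should not download
    # rename: append a counter until a free name is found,
    # so both the old and the new image are preserved
    root, ext = os.path.splitext(path)
    n = 1
    while os.path.exists(f"{root}_{n}{ext}"):
        n += 1
    return f"{root}_{n}{ext}"
```

With the rename policy, a second download of pic.jpg into the same folder would land in pic_1.jpg, a third in pic_2.jpg, and so on.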
Tip: You can use the Queue Manager to add multiple addresses that you want processed to the program at once. Alternatively, you can simply paste multiple URLs into the main interface one after the other, as images discovered during the parsing stage are appended to the queue automatically. They will all end up in a single directory structure that way, though.
The Queue Manager displays all jobs that are currently being processed. One interesting feature is the ability to schedule jobs: if you want images to be downloaded during a specific time of day, you can configure that here.
You can add URLs to the Queue Manager directly, which is great for bulk importing them.
Selecting the download range manually
You can use variables to define the download range manually. This usually requires that you understand the URL structure of the website that you want to download images from. If it uses a sequential structure, e.g. page/1/, page/2/, page/100/, then you can define the range easily using the following syntax:

example.com/page/[1-10]/
This will parse page 1 to page 10 of the address. Pages that do not exist are skipped automatically. I would advise you to select a range that is not too large, as you may run into slowdowns if you parse 100 pages and download images from all of them, especially if those pages contain hundreds of images each.
What is interesting is that this overrides the page limit that you have set in the application: if you specify a range of 30 pages, Bulk Image Downloader will download images from all 30.
That is not the only option you have here, however. You can also make use of advanced range specifiers:
- example.com/gallery/page[1,s-10].html - Will skip the first page, and download images from all pages up to page 10 (the s means skip)
- example.com/album[1-10,A]/pics[A]_[001-100].jpg - Uses the label A defined in [1-10,A] for identification of pictures as well.
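Under the hood, such ranges amount to simple template expansion. A rough Python reimplementation of the basic numeric form, ignoring the skip (s-) and label (A) specifiers; this is my sketch, not BID's actual parser:

```python
import re

def expand_range(url_template):
    """Expand a single [start-end] specifier into concrete URLs.
    Zero-padded bounds like [001-100] keep their padding."""
    m = re.search(r"\[(\d+)-(\d+)\]", url_template)
    if not m:
        return [url_template]       # no range specifier present
    start, end = m.group(1), m.group(2)
    # a leading zero signals fixed-width numbering
    width = len(start) if start.startswith("0") else 0
    return [
        url_template[:m.start()] + str(n).zfill(width) + url_template[m.end():]
        for n in range(int(start), int(end) + 1)
    ]
```

Expanding example.com/page/[1-10]/ this way yields the ten page addresses that the program would then crawl in turn.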
There are a couple of other features that you may like. You can use the program to download files from password-protected websites (sites that require authorization), have it load cookies automatically from a selected web browser for that same purpose, or use the integrated Link Explorer to pick download links from a list of discovered links.
Bulk Image Downloader is getting better with every release. This is the program to have if you are downloading images regularly on the Internet. It works with the majority of sites out there, including Facebook, Flickr, Reddit, Imgur, and many others, is highly flexible thanks to its advanced syntax, and does most of the work for you without you even realizing it.