Most web browsers let you download individual images with relative ease. It usually takes only a couple of clicks to do so.
You will run into issues, however, when you try to download multiple images from a page, or from several pages. Selecting images individually for download still works, but it takes a lot of time; time that is better spent doing something else.
At first glance, Bulk Image Downloader looks just like any other mass downloader out there. Spend some time with it, however, and you will realize that it is probably the most sophisticated program in this niche that you can get your hands on.
Several features set the program apart, including its excellent parser and automation, but also the way multiple pages are crawled by the application, and options to use variables in addresses.
Installing the program should not pose any trouble. The setup is clean and does not contain any third-party offers.
Once done, you get the option to launch the main interface as well as a small drop box onto which you can drag and drop addresses.
Before you download your first batch of images, you may want to jump into the configuration and review the important settings listed there.
You may also want to set the save directory in the main Bulk Image Downloader window to make sure images are saved into an appropriate location on your system.
It could not be easier to use the Bulk Image Downloader application. All you have to do is add a web address to the program, either by dragging and dropping it on the drop box, or by adding it to the main interface directly.
The program starts to parse the URL automatically based on the selected configuration. If things go well, you will soon see image thumbnails in the lower half of the screen, indicating that downloadable images have been found.
Above that, you find filtering options that you need to know about. By default, BID displays only full-sized images and downloads those once you give the command. This is usually the lowest number of images shown in the filter toolbar. You can switch the filter to display all images found on a page, or only embedded pictures.
This means that smaller images, such as thumbnails or icons, are not listed by default. This makes sense, as most users do not want those when downloading images from the Internet.
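The full-size filter can be thought of as a simple dimension threshold. The sketch below is a conceptual illustration only, not BID's actual code; the 400x400 cutoff is an arbitrary assumption made for the example.

```python
def filter_full_sized(images, min_width=400, min_height=400):
    """Keep only images whose dimensions meet the thresholds,
    dropping thumbnails and icons. Thresholds are illustrative
    assumptions, not Bulk Image Downloader's real cutoffs."""
    return [img for img in images
            if img["width"] >= min_width and img["height"] >= min_height]

# Hypothetical parse results from a page.
found = [
    {"url": "photo1.jpg", "width": 1920, "height": 1080},
    {"url": "icon.png", "width": 32, "height": 32},
    {"url": "thumb.jpg", "width": 150, "height": 150},
]
print([img["url"] for img in filter_full_sized(found)])  # ['photo1.jpg']
```

Switching the toolbar filter to "all images" corresponds to dropping the threshold entirely.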
You can select items individually here for download, or hit the download button to download them all in rapid succession. The page title is used by default as the name of the folder the images are stored in. You can change the title before you start the process if you like; it may make sense, for example, to add the address the images were downloaded from to the folder name.
Existing images are overwritten by default, which you can change in the main interface. You can have them skipped automatically, or renamed automatically so that both the new and the existing image are preserved.
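The three collision policies amount to a small decision before each file is saved. Here is a minimal sketch of that logic, assuming a simple numeric rename scheme; the actual naming pattern BID uses may differ.

```python
import os

def resolve_collision(path, policy="rename"):
    """Decide what to do when a file at `path` already exists.
    Returns the path to write to, or None to skip the download.
    Conceptual sketch only; not Bulk Image Downloader's code."""
    if not os.path.exists(path) or policy == "overwrite":
        return path
    if policy == "skip":
        return None
    # "rename": append a counter until the name is free, so both the
    # new and the existing image are preserved.
    root, ext = os.path.splitext(path)
    n = 1
    while os.path.exists(f"{root}_{n}{ext}"):
        n += 1
    return f"{root}_{n}{ext}"
```

With this scheme, a second download of img.jpg would be saved as img_1.jpg, a third as img_2.jpg, and so on.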
Tip: You can use the Queue Manager to add multiple addresses that you want processed to the program at once. Alternatively, you can simply paste multiple URLs into the main interface one after the other, as images discovered during the parsing stage are appended to the queue automatically by default. Note that they will all be saved into a single directory structure, though.
The Queue Manager displays all jobs that are currently being processed. One interesting feature is the ability to schedule jobs: if you want images to be downloaded during a specific time of day, you can set that up here.
You can add URLs to the Queue Manager directly, which is great for bulk imports.
Selecting the download range manually
You can use variables to define the download range manually. This usually requires that you understand the URL structure of the website you want to download images from. If it uses a sequential structure, e.g. page/1/, page/2/, page/100/, you can define the range easily with a range expression in the address.
A range of 1 to 10, for instance, will parse page 1 through page 10 of the address. Pages that do not exist are skipped automatically. I would advise against selecting too large a range, as you may run into slowdowns if you parse 100 pages and download images from them, especially if those pages contain hundreds of images each.
What is interesting is that this overrides the page limit set in the application: if you choose to download images from 30 pages, Bulk Image Downloader will do so.
That is not the only option you have here, however: you can also make use of advanced range specifiers for more complex address structures.
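To show how such a range expands into concrete page addresses, the sketch below uses a hypothetical {start-end} / {start-end:step} notation; this is an assumption for illustration, not necessarily BID's real syntax, so check the program's documentation for its exact specifiers.

```python
import re

def expand_range(url):
    """Expand a hypothetical {start-end} or {start-end:step} placeholder
    in a URL into a list of concrete page addresses. Zero-padded bounds
    (e.g. {01-10}) keep their padding. Illustrative sketch only."""
    m = re.search(r"\{(\d+)-(\d+)(?::(\d+))?\}", url)
    if not m:
        return [url]
    start, end, step = m.group(1), m.group(2), m.group(3)
    width = len(start) if start.startswith("0") else 0
    nums = range(int(start), int(end) + 1, int(step or 1))
    return [url[:m.start()] + str(n).zfill(width) + url[m.end():]
            for n in nums]

print(expand_range("http://example.com/page/{1-3}/"))
# ['http://example.com/page/1/', 'http://example.com/page/2/',
#  'http://example.com/page/3/']
```

A step specifier such as {1-9:4} would visit pages 1, 5, and 9 only, which is the kind of job an advanced range specifier handles.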
There are a couple of other features that you may like. You can use the program to download files from password-protected websites (sites that require authorization), have it load cookies from a selected web browser automatically for that very same purpose, or use the integrated Link Explorer to pick download links from a list of discovered links.
Bulk Image Downloader 5 is a major upgrade. The new version introduces several new features and options that all users will benefit from.
This includes, among many other improvements, better support for popular websites such as Facebook, Pinterest, or Flickr, support for Windows 10, better memory handling, and improved cookie handling.
Bulk Image Downloader is getting better with every release. This is the program to have if you are downloading images regularly on the Internet. It works with the majority of sites out there, including Facebook, Flickr, Reddit, Imgur, and many others, is highly flexible thanks to its advanced syntax, and does most of the work for you without you even realizing it.
Ghacks is a technology news blog that was founded in 2005 by Martin Brinkmann. It has since then become one of the most popular tech news sites on the Internet with five authors and regular contributions from freelance writers.