Back up your Apache web directory and database with this simple script
I administer a lot of web sites, and all of them need backup solutions. Since most of these sites run on LAMP servers, it only made sense to set up a backup system using the available, included open source tools. It didn't take long to create a solid backup system and, with the help of cron, automate it so that Apache's document root and the website databases are backed up regularly and without user intervention.
The script makes use of the following tools: date, cat, tar, mv, and rm. That's it. It creates backups with the date in the file name and then moves them to a central location. Without further ado, let's get to the script.
#!/bin/bash

# Format the date in YEAR-MO-DY format
TODAY=$(/bin/date +%Y-%m-%d)
TMP=/tmp/

# Check to see if there is a lastbackup file in /tmp. If there is, set
# LAST to the date it contains so only files changed since the last run
# are archived; if not, use an old date so everything is archived.
# Either way, record today's date for the next run.
if [ -f /tmp/lastbackup ]; then
    LAST=$(/bin/cat /tmp/lastbackup)
else
    LAST=1970-01-01
fi
/bin/date +%Y-%m-%d > /tmp/lastbackup

# Set the web directory backup name to the following
WEB_FILENAME=$TODAY-web.tar.gz
# Set the database backup name to the following
DB_FILENAME=$TODAY-db.tar.gz

# This tars up my web directory into the $TODAY-web.tar.gz tarball,
# archiving only files changed since the last backup
/bin/tar -czf $TMP$WEB_FILENAME --after-date=$LAST /var/www/html
# Move the web backup to the backup directory (this also clears it
# out of the temp directory)
/bin/mv $TMP$WEB_FILENAME /data

# This tars up my database directory into the $TODAY-db.tar.gz tarball
/bin/tar -czf $TMP$DB_FILENAME --after-date=$LAST /var/lib/mysql
# Move the database backup to the backup directory
/bin/mv $TMP$DB_FILENAME /data
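Restoring works in reverse: extract the full backup first, then each incremental archive in date order. Here is a self-contained sketch (the date 2016-01-15 and the temporary directories are placeholders; a real restore would extract with -C / so the var/www/html paths land back in place):

```shell
#!/bin/sh
# Build a tiny demo tree and archive it the way backup.sh does
# (paths stored relative to /), then restore it into a fresh root.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/var/www/html"
echo "<h1>hello</h1>" > "$DEMO/var/www/html/index.html"
tar -czf "$DEMO/2016-01-15-web.tar.gz" -C "$DEMO" var/www/html

# Restore into a scratch directory; for a real restore use -C /
RESTORE=$(mktemp -d)
tar -xzf "$DEMO/2016-01-15-web.tar.gz" -C "$RESTORE"
cat "$RESTORE/var/www/html/index.html"
```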
What I want this script to do is create daily backups and move them to the /data directory on the drive housing the server. These backups are kept for one month. At the end of the month, a second script deletes the month's backups before the next backup runs (so there is always a backup to fall back on). Making use of this is simple: I save the script (called backup.sh) in the root user's home directory and create a second script called rm_backups.sh that looks like this:
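The body of rm_backups.sh is missing from the listing; a minimal sketch, assuming the date-stamped file names and the /data location described above:

```shell
#!/bin/sh
# rm_backups.sh -- clear the previous month's backups out of /data
# before the next cycle begins. The glob patterns assume the
# date-stamped names produced by backup.sh.
/bin/rm -f /data/*-web.tar.gz /data/*-db.tar.gz
```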
With these two files in place I create two cron entries. The first cron entry is for running the backup.sh script and looks like:
0 23 * * *    ~/backup.sh
The second cron entry is for running the rm_backups.sh script and looks like:
0 20 1 * *    ~/rm_backups.sh
Both of the above cron jobs are created as the root user.
Naturally, this solution could easily be modified (using tools such as rsync) to set up an offsite backup solution. What should be obvious is that creating a simple, flexible server backup system on Linux is easy. With the help of a little ingenuity, you can create your own automated backup service.
This looks great. One thing to keep in mind, though, is that this doesn't account for changes to the filesystem while the backup is running (which, if the websites are live, is extremely likely to be the case). You may want to add a lock to the database, as well as use something like mysqldump. Here's what I would do:
mysqldump --all-databases > my_dump_file.sql
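Folded into the article's script, that might look like the following sketch (the dump path and filename are assumptions, and the mysqldump call only runs if the client is installed):

```shell
#!/bin/sh
# Sketch: dump all databases to a date-stamped SQL file and compress
# it, instead of tarring the live /var/lib/mysql directory.
TODAY=$(date +%Y-%m-%d)
DUMP_FILE="/tmp/$TODAY-db.sql"

if command -v mysqldump >/dev/null 2>&1; then
    # --all-databases dumps every database; the default --opt group
    # locks each database's tables while it is being dumped
    mysqldump --all-databases > "$DUMP_FILE"
    tar -czf "/tmp/$TODAY-db.tar.gz" -C /tmp "$TODAY-db.sql"
    # Move the compressed dump to the backup directory, if present
    [ -d /data ] && mv "/tmp/$TODAY-db.tar.gz" /data
    rm -f "$DUMP_FILE"
else
    echo "mysqldump not installed; skipping dump" >&2
fi
```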
Note that this will lock the tables (the following is copied from the MySQL site):
This option is shorthand. It is the same as specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. It should give you a fast dump operation and produce a dump file that can be reloaded into a MySQL server quickly.
The --opt option is enabled by default. Use --skip-opt to disable it. See the discussion at the beginning of this section for information about selectively enabling or disabling a subset of the options affected by --opt.
Also, before pulling the files, you probably would want to flush any pending writes to the filesystem. You can do this using:
fsfreeze -f mountpoint
# do the copy
fsfreeze -u mountpoint
That’s a good idea, thanks for pointing that out.