Tutorial

How To Backup MySQL Databases on an Ubuntu VPS

Published on August 28, 2013

Status: Deprecated

This article covers a version of Ubuntu that is no longer supported. If you currently operate a server running Ubuntu 12.04, we highly recommend upgrading or migrating to a supported version of Ubuntu:

Reason: Ubuntu 12.04 reached end of life (EOL) on April 28, 2017 and no longer receives security patches or updates. This guide is no longer maintained.

See Instead:
This guide might still be useful as a reference, but may not work on other Ubuntu releases. If available, we strongly recommend using a guide written for the version of Ubuntu you are using. You can use the search functionality at the top of the page to find a more recent version.

What is MySQL?

MySQL is a popular database management solution that uses the SQL querying language to access and manipulate data. It can easily be used to manage the data from websites or applications.

Backups are important with any kind of data, and this is especially relevant when talking about databases. MySQL can be backed up in a few different ways that we will discuss in this article.

For this tutorial, we will be using an Ubuntu 12.04 VPS with MySQL 5.5 installed. Most modern distributions and recent versions of MySQL should operate in a similar manner.

How to Backup a MySQL Database with mysqldump

One of the most common ways of backing up with MySQL is to use a command called "mysqldump".

Backing Up

There is a separate article on how to export databases using mysqldump. The basic syntax of the command is:

mysqldump -u username -p database_to_backup > backup_name.sql
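
For larger databases, it is common to compress the dump as it is written. For example, you can pipe the output through gzip (the filenames here are placeholders):

mysqldump -u username -p database_to_backup | gzip > backup_name.sql.gz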

Restoring

To restore a database dump created with mysqldump, you simply have to redirect the file into MySQL again.

We need to create a blank database to house the imported data. First, log into MySQL by typing:

mysql -u username -p

Create a new database which will hold all of the data from the data dump and then exit out of the MySQL prompt:

CREATE DATABASE database_name;
exit

Next, we can redirect the dump file into our newly created database by issuing the following command:

mysql -u username -p database_name < backup_name.sql

Your information should now be restored to the database you've created.
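
If the dump was compressed, as in the gzip example above, you can decompress it on the fly while restoring:

gunzip < backup_name.sql.gz | mysql -u username -p database_name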

How to Backup a MySQL Table to a Text File

You can save the data from a table directly into a text file by using the select statement within MySQL.

The general syntax for this operation is:

SELECT * INTO OUTFILE 'table_backup_file' FROM name_of_table;

This operation will save the table data to a file on the MySQL server. It will fail if a file with the chosen name already exists.
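
For example, assuming a hypothetical "customers" table and a directory the MySQL server can write to (where configured, the secure_file_priv setting restricts which directories are allowed), you could write a comma-separated backup like this:

SELECT * INTO OUTFILE '/tmp/customers_backup.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM customers;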

Note: This option only saves table data. If your table structure is complex and must be preserved, it is best to use another method!

How to Backup MySQL Information using automysqlbackup

There is a utility program called "automysqlbackup" that is available in the Ubuntu repositories.

This utility can be scheduled to automatically perform backups at regular intervals.

To install this program, type the following into the terminal:

sudo apt-get install automysqlbackup

Run the command by typing:

sudo automysqlbackup

The main configuration file for automysqlbackup is located at "/etc/default/automysqlbackup". Open it with administrative privileges:

sudo nano /etc/default/automysqlbackup

You can see that, by default, this file assigns many of its variables using the MySQL file located at "/etc/mysql/debian.cnf", which contains maintenance login information.

From that file, it reads the user, password, and databases that should be backed up.
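
The key variables to check or override look roughly like the following illustrative excerpt. The values shown here are placeholders; the shipped file actually derives the username and password from "/etc/mysql/debian.cnf" rather than hard-coding them:

# Illustrative excerpt of /etc/default/automysqlbackup (values are placeholders)
USERNAME=debian-sys-maint
PASSWORD=your_maintenance_password
DBHOST=localhost
DBNAMES="all"
BACKUPDIR="/var/lib/automysqlbackup"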

The default location for backups is "/var/lib/automysqlbackup". Search this directory to see the structure of the backups:

ls /var/lib/automysqlbackup
daily  monthly weekly

If we look into the daily directory, we can see a subdirectory for each database, inside of which is a gzipped sql dump from when the command was run:

ls -R /var/lib/automysqlbackup/daily
.:
database_name  information_schema  performance_schema

./database_name:
database_name_2013-08-27_23h30m.Tuesday.sql.gz

./information_schema:
information_schema_2013-08-27_23h30m.Tuesday.sql.gz

./performance_schema:
performance_schema_2013-08-27_23h30m.Tuesday.sql.gz

Ubuntu installs a cron script with this program that will run it every day and organize the resulting files into the appropriate daily, weekly, and monthly directories.
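
The cron entry is typically installed under "/etc/cron.daily/"; you can confirm it is present by typing:

ls /etc/cron.daily/ | grep automysqlbackup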

How to Backup When Using Replication

It is possible to use MySQL replication to backup data with the above techniques.

Replication is a process of mirroring the data from one server to another server (master-slave) or mirroring changes made to either server to the other (master-master).

While replication allows for data mirroring, it suffers when you are trying to save a specific point in time. This is because it is constantly replicating the changes of a dynamic system.

To avoid this problem, we can either:

  • Disable replication temporarily
  • Make the backup machine read-only temporarily

Disabling Replication Temporarily

You can disable replication for the slave temporarily by issuing:

mysqladmin -u user_name -p stop-slave

Another option, which doesn't completely stop replication, but puts it on pause, so to speak, can be accomplished by typing:

mysql -u user_name -p -e 'STOP SLAVE SQL_THREAD;'

After replication is halted, you can backup using one of the methods above. This allows you to keep the master MySQL database online while the slave is backed up.

When this is complete, restart replication by typing:

mysqladmin -u user_name -p start-slave
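
Putting these steps together, a backup-from-the-slave run might look like the following sketch. The user name, database name, and backup path are placeholders, and each -p flag will prompt for the password interactively; for an unattended job you would supply credentials another way, such as a client options file:

# Pause replication on the slave
mysqladmin -u user_name -p stop-slave

# Dump the database while replication is stopped
mysqldump -u user_name -p database_name > /var/backups/database_name_$(date +%F).sql

# Resume replication
mysqladmin -u user_name -p start-slave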

Making the Backup Machine Read-Only Temporarily

You can also ensure a consistent set of data within the server by making the data read-only temporarily.

You can perform these steps on either the master or the slave systems.

First, log into MySQL with enough privileges to manipulate the data:

mysql -u root -p 

Next, we can write all of the cached changes to the disk and set the system read-only by typing:

FLUSH TABLES WITH READ LOCK;
SET GLOBAL read_only = ON;

Now, perform your backup using mysqldump.

Once the backup is complete, return the system to its original working order by typing:

SET GLOBAL read_only = OFF;
UNLOCK TABLES;
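
For clarity, the whole sequence can be laid out as follows. This is only a sketch: keep the first MySQL session open for the entire dump, because disconnecting releases the read lock, and run mysqldump from a second terminal:

-- Terminal 1: keep this MySQL session open
FLUSH TABLES WITH READ LOCK;
SET GLOBAL read_only = ON;

# Terminal 2: perform the backup from the shell
mysqldump -u root -p --all-databases > all_databases_backup.sql

-- Terminal 1: release the lock once the dump finishes
SET GLOBAL read_only = OFF;
UNLOCK TABLES;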

A Note About Techniques That Are No Longer Recommended

mysqlhotcopy

MySQL includes a perl script for backing up databases quickly called "mysqlhotcopy". This tool can be used to quickly backup a database on a local machine, but it has limitations that make us avoid recommending it.

The most important reason we won't cover mysqlhotcopy's usage here is because it only works for data stored using the "MyISAM" and "Archive" storage engines.

Most users do not change the storage engine for their databases and, starting with MySQL 5.5, the default storage engine is "InnoDB". This type of database cannot be backed up using mysqlhotcopy.

Another limitation of this script is that it can only be run on the same machine where the database storage is kept. This prevents running backups from a remote machine, which can be a major limitation in some circumstances.

Copying Table Files

Another method sometimes suggested is simply copying the table files that MySQL stores its data in.

This approach suffers from one of the same limitations as "mysqlhotcopy".

While it is reasonable to use this technique with storage engines that store their data in files, InnoDB, the new default storage engine, cannot be backed up in this way.

Conclusion

There are many different methods of performing backups in MySQL. All have their benefits and weaknesses, but some are much easier to implement and more broadly useful than others.

The backup scheme you choose to deploy will depend heavily on your individual needs and resources, as well as your production environment. Whatever method you decide on, be sure to validate your backups and practice restoring the data, so that you can be sure that the process is functioning correctly.

By Justin Ellingwood

27 Comments

I would highly suggest using the --single-transaction parameter on large databases; otherwise INSERTs and UPDATEs will wait until the current table is done being dumped, and if you run a high-transaction database those connections will just pile up and then you’ll get the infamous “too many connections” error

But, doesn’t this apply only to InnoDB tables? As per the MySQL docs, this parameter will not help dump either MyISAM or the MySQL memory tables in a consistent state. So, this may not be helpful for web applications that use both InnoDB and MyISAM tables - Magento is a good example.

alexdo (Site Moderator), August 20, 2024

You’re absolutely right. When dealing with large databases and high-transaction environments, using the --single-transaction parameter with mysqldump is crucial. This option helps ensure that your dump is consistent without locking tables, which is especially important for busy databases where long-running locks can lead to performance issues and “too many connections” errors.

The --single-transaction option instructs mysqldump to perform a backup in a consistent state using a single transaction. This means that all tables are backed up in a consistent snapshot of the database at the start of the transaction, avoiding the need to lock tables for the duration of the dump.

Regards

alexdo (Site Moderator), August 22, 2024

For large, high-transaction databases, the following command is recommended:

mysqldump -u [username] -p[password] --single-transaction --routines --triggers --events [database_name] > /path/to/backup/[backup_file].sql

For example:

mysqldump -u root -pmypassword --single-transaction --routines --triggers --events database_name > /path/to/backup/backup_file.sql

Regards

May I recommend Xtrabackup from Percona. http://www.percona.com/doc/percona-xtrabackup/2.1/

alexdo (Site Moderator), August 20, 2024

Percona XtraBackup is indeed a powerful tool for performing hot backups of MySQL and MariaDB databases.

Unlike mysqldump, which locks tables and can be slower for large databases, XtraBackup allows for non-blocking, consistent backups of InnoDB, MyISAM, and other MySQL storage engines without disrupting the ongoing transactions. This makes it particularly valuable for high-traffic databases or those requiring high availability.

Regards

It’s important to use the --routines argument so that mysqldump exports all triggers, views, procedures, and functions of your database.

mysqldump -u root -p --routines … > out.sql

alexdo (Site Moderator), August 22, 2024

Heya,

Including the --routines argument in the mysqldump command is crucial for backing up not just the data and table structures, but also the stored procedures, functions, triggers, and views associated with your database. This ensures a complete backup of all the logical components that your application might depend on.

Regards

My extracted back up files are empty. Did I miss something?

alexdo (Site Moderator), August 20, 2024

If your extracted backup files are empty, there could be a few possible reasons:

Ensure that the mysqldump command in your script is correct and includes the appropriate flags for the data you want to back up. Also check the size of the .sql file before and after compression (if applicable). If the uncompressed file size is very small or 0 bytes, the dump likely failed.

mysqldump -u [username] -p[password] [database_name] > /path/to/backup/[backup_file].sql

Regards

@Milo Felipe,

Can I have the mysqldump command? Did you check your current DB size? Then I can give a suggestion.

“For larger databases, where mysqldump would be impractical or inefficient, you can back up the raw data files instead”

http://dev.mysql.com/doc/mysql-backup-excerpt/5.6/en/replication-solutions-backups.html

I am going to add a 1GB slave server, where I will make my backups.

Can I use a cron job to disable replication on the slave server, then do a mysqldump?

alexdo (Site Moderator), August 20, 2024

Yes, you can temporarily stop the replication on your MySQL slave server to perform a mysqldump, and then restart the replication afterward. This method ensures that your backup is consistent and that there are no conflicts between the ongoing replication process and the dump operation.

I’d recommend adding a check that the slave is running before stopping it, and also checking that the slave has started again after the START SLAVE command (see the sketch further below).

Ensure that there is enough disk space for both the raw backup and the compressed file.

Running mysqldump on large databases can still be resource-intensive, so plan the timing of these jobs accordingly.

Also monitor replication lag, especially after restarting the slave, to ensure it doesn’t fall too far behind.
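
A minimal sketch of such a check might look like this. It is only illustrative: the database name and backup path are placeholders, and it assumes credentials are supplied via a client options file such as ~/.my.cnf so the commands run non-interactively:

#!/bin/bash
# Back up from the slave only if replication is running, then restart it.
RUNNING=$(mysql -e "SHOW SLAVE STATUS\G" | awk '/Slave_SQL_Running:/ {print $2}')

if [ "$RUNNING" = "Yes" ]; then
    mysql -e "STOP SLAVE SQL_THREAD;"
    mysqldump --single-transaction database_name > /var/backups/database_name_$(date +%F).sql
    mysql -e "START SLAVE;"
    # Confirm both replication threads resumed
    mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running:"
else
    echo "Slave is not running; skipping backup." >&2
fi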

Regards

Kamal Nasser (DigitalOcean Employee), February 8, 2014

@KiwoT: Why do you want to disable the replication?

Hi,

is there any way to back up all server config (ubuntu / plesk / apache / all mods / varnish config / fail2ban / …)?

Thanks, best regards, Edouard

alexdo (Site Moderator), August 20, 2024

As mentioned, the simplest way to backup the entire server is to take a snapshot.

Andrew SB (DigitalOcean Employee), July 22, 2014

The simplest way to backup the entire server is to take a snapshot.

I have a web solution to make automatic MySQL backups easy.

Here is the link: http://www.abmysql.com/

In August 2014 our technical service raised an alert: we lost our entire MySQL database after a restart of our servers. After studying the case, we found this can happen even to large organizations, and that’s scary. Are you sure that your database backups function properly? Do you have a backup system for your MySQL databases? We fight every day so that this does not happen to us, and so that it does not happen to you.

If you are interested in talking about my solution, I can put your link on my website.

Here is the link: http://www.abmysql.com/

Thanks, BERNIMONT Thomas

alexdo (Site Moderator), August 20, 2024

Thanks for sharing the tool with the community Thomas!

Regards

Did you know? We now use www.abmysql.com. Before, we used the command line, but that is too hard sometimes; abmysql can help you.

I just set up automysqlbackup. One small question I have: do I need to manually add that command as a cron job, or is it added automatically?

alexdo (Site Moderator), August 3, 2024

I believe the cron job needs to be added manually in the crontab.

Regards

Thanks for the tutorial.

There is a little typo: “ls -R /var/lib/automysqlbackup/dailey” must be “ls -R /var/lib/automysqlbackup/daily”

Hello, in the Ubuntu repositories automysqlbackup is at version 2.6; how can I update to version 3? Thanks a lot.

We are using this tool on all our servers at DO: backupbird.com

It gathers all your servers in a simple dashboard and alerts you if a server fails to back up. You can see statistics on CPU usage and memory usage during backup.

It can back up MySQL as well as regular server files.

It also shows the backup size over time.

alexdo (Site Moderator), August 3, 2024

Heya,

Thanks for sharing this with the community!

I’ll also share Snapshooter as a backup solution. You can use SnapShooter to back up servers, volumes, databases, and applications from DigitalOcean and other cloud providers.

https://docs.digitalocean.com/products/snapshooter/

Regards

Thanks for the automysqlbackup setup. I never knew about that extension, but then I am new to Ubuntu. It works fine.

Now all I have to do is learn how to create an automated WinSCP script to download the database and I’m good to go.

I don’t want to have an email server on the VPS, so I can’t use that option.

WinSCP should take a leaf from your book and create instructions for us simple folk.

Thanks again for the great informative instruction.

alexdo (Site Moderator), August 20, 2024

I’m glad to hear that automysqlbackup is working well for you! As for automating the download of your database backups using WinSCP, I can guide you through creating a simple script to accomplish that. You can set up a WinSCP script to automatically connect to your VPS, download the backup files, and then exit. This is a great alternative if you’re not using email notifications.

option batch on
option confirm off

# Connect to your VPS
open sftp://username:password@hostname_or_ip

# Navigate to the directory containing the backups
cd /path/to/backup/directory

# Download the backups to your local machine
get *.sql C:\path\to\local\backup\directory\

# Close the connection
close

# Exit WinSCP
exit

Save this as a WinSCP script file and run it manually first (for example with winscp.com /script=yourscript.txt) to ensure everything is working correctly. If it connects, downloads the backups, and exits, then you’re all set!

Regards

This is a very helpful overview of the db backup options. I wasn’t aware that mysqlhotcopy is not recommended.

However, doesn’t any kind of automated MySQL backup (automysqlbackup or otherwise) involve using mysqldump, which in turn is not transaction safe?

If yes, wouldn’t the best options for fail-safe DB backups, especially for web sites and applications with significant traffic/transactions, without shutting the site or MySQL server down, probably be one of the following:

  • Setup master-slave replication (where the slave would act as the backup), or,
  • Install and configure Percona XtraBackup

For sites or applications with infrequent transactions or where frequent maintenance windows are fine, automated backups using mysqldump are probably okay.

It would be interesting to hear some thoughts from other users about this.

alexdo (Site Moderator), August 20, 2024

You’re absolutely right—using mysqldump for backups can have some limitations, particularly when it comes to transaction safety and performance impact.

In a master-slave setup, changes to the master database are replicated in real-time to one or more slave databases. The slave can act as a live backup, and you can take backups from the slave to avoid load on the master.

For high-traffic websites and applications, where uptime and data consistency are critical, it’s often recommended to use Percona XtraBackup in conjunction with replication. This setup offers robust, transaction-safe backups without impacting the live environment.

As your application scales, consider moving to more sophisticated setups like multi-master replication, Galera Cluster, or distributed database solutions like Vitess for managing large-scale databases.

mysqldump has its place, especially for smaller or less critical databases, but for high-traffic, transaction-heavy applications, more sophisticated approaches like replication and Percona XtraBackup are typically recommended. These methods provide better data safety, reduce downtime, and can scale more effectively as your application grows.

Regards

As for me, the best way to backup MySQL databases on Linux is to use mysqldump. But if you are looking for a tool to backup MySQL on Windows, then try http://mysqlbackupftp.com/mysql-blog/how-to-backup-mysql-database/

Does automysqlbackup work with MariaDB? @Jellingwood

Thanks, it really helped me a lot.

" mysqldump -u username -p database_to_backup > backup_name.sql "

Where does this save to?

I followed the script linked below from this page and it works well for me. It has many feature options listed there, and it can back up automatically via your cron job. MySql AutoBackup GZIP [https://vvcares.com/w/mysql-auto-backup-all-databases-solved/]

Thanks, the automysqlbackup method seems to work great on Ubuntu 18.04
