This article covers a version of Ubuntu that is no longer supported. If you are currently operating a server running Ubuntu 12.04, we highly recommend upgrading or migrating to a supported version of Ubuntu:
Reason: Ubuntu 12.04 reached end of life (EOL) on April 28, 2017 and no longer receives security patches or updates. This guide is no longer maintained.
See Instead:
This guide might still be useful as a reference, but may not work on other Ubuntu releases. If available, we strongly recommend using a guide written for the version of Ubuntu you are using. You can use the search functionality at the top of the page to find a more recent version.
MySQL is a popular database management solution that uses the SQL querying language to access and manipulate data. It can easily be used to manage the data from websites or applications.
Backups are important with any kind of data, and this is especially relevant when talking about databases. MySQL can be backed up in a few different ways that we will discuss in this article.
For this tutorial, we will be using an Ubuntu 12.04 VPS with MySQL 5.5 installed. Most modern distributions and recent versions of MySQL should operate in a similar manner.
One of the most common ways of backing up with MySQL is to use a command called "mysqldump".
There is an article on how to export databases using mysqldump here. The basic syntax of the command is:
mysqldump -u username -p database_to_backup > backup_name.sql
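In practice the dump is often compressed and given a date-stamped name. The sketch below assumes a helper of our own invention, backup_file, to build that name; the mysqldump line itself is shown commented out, since it needs a running server and valid credentials:

```shell
# backup_file is a hypothetical helper, not part of MySQL:
# it builds a date-stamped name for a compressed dump.
backup_file() {
    # $1 = database name; emits e.g. mydb-2017-04-28.sql.gz
    echo "$1-$(date +%F).sql.gz"
}

# The dump itself would be piped through gzip, e.g.:
# mysqldump -u username -p database_to_backup | gzip > "$(backup_file database_to_backup)"
echo "Would write to: $(backup_file database_to_backup)"
```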
To restore a database dump created with mysqldump, you simply have to redirect the file into MySQL again.
We need to create a blank database to house the imported data. First, log into MySQL by typing:
mysql -u username -p
Create a new database which will hold all of the data from the data dump and then exit out of the MySQL prompt:
CREATE DATABASE database_name;
exit
Next, we can redirect the dump file into our newly created database by issuing the following command:
mysql -u username -p database_name < backup_name.sql
Your information should now be restored to the database you've created.
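To run dumps like this on a schedule, a crontab entry can be used. Everything below (the time, the credentials file, the output path) is an illustrative assumption; --defaults-extra-file lets mysqldump read credentials from a file rather than from the command line:

```
# m h dom mon dow  command  (runs every night at 23:30; paths are examples)
30 23 * * * mysqldump --defaults-extra-file=/home/backup_user/.my.cnf database_name > /var/backups/database_name.sql
```

The referenced .my.cnf should contain a [client] section with the user and password, and should be readable only by its owner (mode 600).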
You can save the data from a table directly into a text file by using the select statement within MySQL.
The general syntax for this operation is:
SELECT * INTO OUTFILE 'table_backup_file' FROM name_of_table;
This operation will save the table data to a file on the MySQL server. It will fail if there is already a file with the name chosen.
Note: This option only saves table data. If your table structure is complex and must be preserved, it is best to use another method!
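A file written this way can later be reloaded with the complementary LOAD DATA INFILE statement, provided the target table has been created again with a matching structure. A sketch using the same placeholder names as above:

```sql
-- Reload rows previously written by SELECT ... INTO OUTFILE.
-- The table structure must already exist; only row data is restored.
LOAD DATA INFILE 'table_backup_file' INTO TABLE name_of_table;
```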
There is a utility program called "automysqlbackup" that is available in the Ubuntu repositories.
This utility can be scheduled to automatically perform backups at regular intervals.
To install this program, type the following into the terminal:
sudo apt-get install automysqlbackup
Run the command by typing:
sudo automysqlbackup
The main configuration file for automysqlbackup is located at "/etc/default/automysqlbackup". Open it with administrative privileges:
sudo nano /etc/default/automysqlbackup
You can see that this file, by default, assigns many of its variables using the MySQL file located at "/etc/mysql/debian.cnf", which contains maintenance login information. From this file, it reads the user, password, and databases that must be backed up.
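For reference, /etc/mysql/debian.cnf on Ubuntu typically looks like the fragment below; the password is randomly generated at install time, so the values shown here are placeholders:

```
[client]
host     = localhost
user     = debian-sys-maint
password = <randomly-generated>
socket   = /var/run/mysqld/mysqld.sock
```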
The default location for backups is "/var/lib/automysqlbackup". Search this directory to see the structure of the backups:
ls /var/lib/automysqlbackup
daily monthly weekly
If we look into the daily directory, we can see a subdirectory for each database, inside of which is a gzipped sql dump from when the command was run:
ls -R /var/lib/automysqlbackup/daily
.:
database_name  information_schema  performance_schema

./database_name:
database_name_2013-08-27_23h30m.Tuesday.sql.gz

./information_schema:
information_schema_2013-08-27_23h30m.Tuesday.sql.gz

./performance_schema:
performance_schema_2013-08-27_23h30m.Tuesday.sql.gz
Ubuntu installs a cron script along with this program that runs it every day and organizes the files into the appropriate directories.
It is also possible to use MySQL replication in combination with the backup techniques above.
Replication is a process of mirroring the data from one server to another server (master-slave) or mirroring changes made to either server to the other (master-master).
While replication allows for data mirroring, it suffers when you are trying to save a specific point in time. This is because it is constantly replicating the changes of a dynamic system.
To avoid this problem, we can temporarily halt replication on the slave while the backup runs. One option is to stop replication on the slave entirely by issuing:
mysqladmin -u user_name -p stop-slave
Another option, which pauses replication rather than stopping it completely, can be accomplished by typing:
mysql -u user_name -p -e 'STOP SLAVE SQL_THREAD;'
After replication is halted, you can backup using one of the methods above. This allows you to keep the master MySQL database online while the slave is backed up.
When this is complete, restart replication by typing:
mysqladmin -u user_name -p start-slave
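The whole stop/dump/restart cycle can be scripted. The sketch below is an assumption of ours, not part of this guide's tooling: it only prints the commands it would run (pass an empty first argument instead of echo to actually execute them), and backup_user and the output path are placeholders:

```shell
#!/bin/sh
# slave_backup runs the pause/dump/resume sequence.
# $1 is a runner prefix: "echo" previews the commands, "" executes them.
slave_backup() {
    $1 mysqladmin -u backup_user -p stop-slave
    $1 mysqldump -u backup_user -p --all-databases --result-file=/var/backups/slave_dump.sql
    $1 mysqladmin -u backup_user -p start-slave
}

# Preview the plan without touching a server:
slave_backup echo
```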
You can also ensure a consistent set of data within the server by making the data read-only temporarily.
You can perform these steps on either the master or the slave systems.
First, log into MySQL with enough privileges to manipulate the data:
mysql -u root -p
Next, we can write all of the cached changes to the disk and set the system read-only by typing:
FLUSH TABLES WITH READ LOCK;
SET GLOBAL read_only = ON;
Now, keeping this session open so that the read lock remains held, perform your backup with mysqldump from a separate terminal.
Once the backup is complete, return the system to its original working order by typing:
SET GLOBAL read_only = OFF;
UNLOCK TABLES;
MySQL includes a Perl script called "mysqlhotcopy" that can quickly back up a database on the local machine, but it has limitations that keep us from recommending it.
The most important reason we won't cover mysqlhotcopy's usage here is because it only works for data stored using the "MyISAM" and "Archive" storage engines.
Most users do not change the storage engine for their databases and, starting with MySQL 5.5, the default storage engine is "InnoDB". This type of database cannot be backed up using mysqlhotcopy.
Another limitation of this script is that it can only be run on the machine where the database storage is kept. This prevents running backups from a remote machine, which can be a major limitation in some circumstances.
Another method sometimes suggested is simply copying the table files that MySQL stores its data in.
This approach suffers for one of the same reasons as "mysqlhotcopy".
While it is reasonable to use this technique with storage engines that store their data in files, InnoDB, the new default storage engine, cannot be backed up in this way.
There are many different methods of performing backups in MySQL. All have their benefits and weaknesses, but some are much easier to implement and more broadly useful than others.
The backup scheme you choose to deploy will depend heavily on your individual needs and resources, as well as your production environment. Whatever method you decide on, be sure to validate your backups and practice restoring the data, so that you can be sure that the process is functioning correctly.
I would highly suggest using the --single-transaction parameter on large databases, otherwise INSERTs and UPDATEs will wait until the current table is done being dumped, and if you run a high-transaction database those connections will just pile up and then you’ll get the infamous “too many connections” error.
But, doesn’t this apply only to InnoDB tables? As per the MySQL docs, this parameter will not help dump either MyISAM or the MySQL memory tables in a consistent state. So, this may not be helpful for web applications that use both InnoDB and MyISAM tables - Magento is a good example.
You’re absolutely right. When dealing with large databases and high-transaction environments, using the --single-transaction parameter with mysqldump is crucial. This option helps ensure that your dump is consistent without locking tables, which is especially important for busy databases where long-running locks can lead to performance issues and “too many connections” errors.

The --single-transaction option instructs mysqldump to perform the backup within a single transaction. This means that all tables are backed up from a consistent snapshot of the database taken at the start of the transaction, avoiding the need to lock tables for the duration of the dump.

Regards
For large, high-transaction databases, a command of the following form is recommended:

mysqldump -u username -p --single-transaction database_name > backup_name.sql

Regards
May I recommend Xtrabackup from Percona. http://www.percona.com/doc/percona-xtrabackup/2.1/
Percona XtraBackup is indeed a powerful tool for performing hot backups of MySQL and MariaDB databases.
Unlike mysqldump, which locks tables and can be slower for large databases, XtraBackup allows for non-blocking, consistent backups of InnoDB, MyISAM, and other MySQL storage engines without disrupting ongoing transactions. This makes it particularly valuable for high-traffic databases or those requiring high availability.

Regards
It’s important to use the ‘--routines’ argument so that mysqldump also exports the stored procedures and functions of your database:
mysqldump -u root -p --routines … > out.sql
Heya,
Including the --routines argument in the mysqldump command is crucial for backing up not just the data and table structures, but also the stored procedures and functions associated with your database (triggers and views are included in a dump by default). This ensures a complete backup of all the logical components that your application might depend on.

Regards
My extracted back up files are empty. Did I miss something?
If your extracted backup files are empty, there could be a few possible reasons.

Ensure that the mysqldump command in your script is correct and includes the appropriate flags for the data you want to back up. Also check the size of the .sql file before and after compression (if applicable). If the uncompressed file size is very small or 0 bytes, the dump likely failed.

Regards
@Milo Felipe,

Can I have the mysqldump command? Did you check your current DB size? With that information I can give a suggestion.
“For larger databases, where mysqldump would be impractical or inefficient, you can back up the raw data files instead”
http://dev.mysql.com/doc/mysql-backup-excerpt/5.6/en/replication-solutions-backups.html
I am going to add a 1GB slave server, where I will make my backups.
Can I, via a cron job, disable the replication on the slave server and then do a mysqldump?
Yes, you can temporarily stop the replication on your MySQL slave server to perform a mysqldump, and then restart the replication afterward. This method ensures that your backup is consistent and that there are no conflicts between the ongoing replication process and the dump operation.

I’d recommend adding a check that the slave is actually running before stopping it, and another check that the slave has started again after the START SLAVE command. Also ensure that there is enough disk space for both the raw backup and the compressed file.

Running mysqldump on large databases can still be resource-intensive, so plan the timing of these jobs accordingly. Also monitor replication lag, especially after restarting the slave, to ensure it doesn’t fall too far behind.
Regards
@KiwoT: Why do you want to disable the replication?
Hi,

is there any way to back up all server config (Ubuntu / Plesk / Apache / all mods / Varnish config / fail2ban / …)?

Thanks. Best regards, Edouard
As mentioned, the simplest way to backup the entire server is to take a snapshot.