I’m trying to import my WordPress database into my Droplet (512MB small Droplet).
I’m particularly having trouble with one of the tables: it’s a 1.6GB table filled with image blobs.
When I run the usual `mysql -u user -p wordpress_database < wordpress_dump.sql`, it runs fine until it hits that large table; a few seconds later the MySQL server “goes away” (crashes?!?) and I can’t connect to it for over 10 minutes.
I also hit a timeout if I try to import that specific table using phpMyAdmin.
Is there a way of importing that large table on the small Droplet?
@s0litaire
Your mileage may vary with this one, though one method you may be able to use without manually breaking down each table is `split`. This command will allow you to break your SQL dump down into multiple smaller SQL files, which you can then pipe to MySQL.

To test this, I created a dummy single-table database called `posts`, which held 2.9GB worth of various data, dumped it to an SQL file called `posts.sql`, and then transferred it over to a 512MB Droplet with MariaDB installed (stock installation, no configuration changes/optimizations).

On the 512MB Droplet, I created `/usr/local/src/db` using something along the lines of:
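```
mkdir -p /usr/local/src/db
```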
I pulled the `posts.sql` dump file over to that directory using `wget`, so the file is `/usr/local/src/db/posts.sql`. I then created a `splits` directory and ran `split` on the dump; the commands were roughly (assuming GNU `split`):
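```
mkdir -p /usr/local/src/db/splits
cd /usr/local/src/db

# -n l/250 makes 250 chunks without breaking any line in half (GNU split),
# so no SQL statement gets cut mid-line
split -n l/250 posts.sql splits/sql_
```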
That splits the `posts.sql` file into 250 chunks, places them in the `splits` directory, and tags each with `sql_`, so each split looks like `sql_a....`. You could technically split it down even more, but I’m not sure how much it’d help. You can run `ls /usr/local/src/db/splits` to confirm the count.

Once you have the splits, you can run something along the lines of (the user, password, and database name here are placeholders for your own):
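```
for chunk in /usr/local/src/db/splits/sql_*; do
    mysql -u root -p'YourPassword' your_database < "$chunk"
done
```

The `sql_*` glob expands in alphabetical order, so the chunks import in the same order `split` wrote them, and each chunk gets its own connection rather than one giant stream.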
And that’ll pipe all of the data to the database. It’s not super-fast (at all), but short of manually breaking the database down to chunks of tables, it’s one option that I can confirm worked on a 2.9GB table.
The table was created and filled using something shaped like this (the column layout here is just a stand-in; the exact schema isn’t important):
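```
mysql -u root -p'YourPassword' -e 'CREATE DATABASE posts'
mysql -u root -p'YourPassword' posts <<'SQL'
CREATE TABLE posts (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    body LONGTEXT
);
....
SQL
```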
Where `....` is quite a few `insert` statements (to the tune of 2.9GB), roughly 17 million.

That said, it did cut it close. This knocked the free RAM on the Droplet down to about 15MB (with 272MB being cache) and held a consistent load of about 1.0 on the CPU (thus maxing it; the load mainly comes from I/O wait time), but it didn’t crash, fail, error out, etc.
That said, again, it’s slow. You may be waiting an hour or more using this method, so don’t plan on it being as fast as it would be using the normal restoration methods.
If you’re running anything else on the Droplet (Apache/NGINX, PHP-FPM, etc), I’d stop the processes first and only have MySQL running while using this method.
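For example, on an NGINX + PHP-FPM setup, that’d look something like (service names vary by stack and release):

```
sudo systemctl stop nginx
sudo systemctl stop php7.0-fpm   # match your installed PHP-FPM version
```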
General note: if you’re not using Ubuntu’s `snapd` service, you can free up a little RAM by disabling it. `snapd` allows you to install certain packages provided by Ubuntu, much like `apt`, but for most it’s neither needed nor a critical service. If you don’t use it, or you’ve never heard of it, you can disable it without any issues.

On my 512MB Droplet it was using about 10MB of RAM, with the ability to use as much as 200MB. Not really something I’d keep running on a memory-constrained system.
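On a systemd-based release, disabling it comes down to:

```
# stop snapd now and keep it (and its socket) from starting at boot
sudo systemctl disable --now snapd.service snapd.socket
```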
@s0litaire
Setting up `swap` on a VPS (Droplet) isn’t recommended by DigitalOcean, so there are a few options. The first would be upgrading your Droplet so that you have more RAM available. With a growing DB size, upgrading would probably be more beneficial now and in the long run. This is confirmed in the guides provided by DigitalOcean (i.e. to not use `swap`), such as the one here. Check the red box at the start of the guide.

The second option would be to optimize MySQL’s configuration to suit your needs. The default config for MySQL isn’t exactly production-ready or intended for long-term use; it’s a starting point. The issue you may face is that you don’t have a lot of RAM to work with, so when modifying `key_buffer` and other configuration options, you may run into more issues than you’re having now. Tuning MySQL or MariaDB on low-RAM VPSes (or servers in general) takes time and testing.
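If you do experiment, the usual knobs live in `my.cnf` (or a file under `/etc/mysql/conf.d/`); the values below are illustrative starting points for 512MB, not recommendations:

```
[mysqld]
key_buffer_size         = 8M
innodb_buffer_pool_size = 64M
tmp_table_size          = 8M
max_heap_table_size     = 8M
max_connections         = 20
performance_schema      = OFF
# large single inserts (e.g. image blobs) can also trip the packet limit,
# which shows up as “MySQL server has gone away”
max_allowed_packet      = 64M
```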
The reason you’re seeing timeouts when using phpMyAdmin is due to upload and post size limits that are defined in PHP’s `php.ini` file. If you’re using NGINX, `client_max_body_size` also plays a role; with it set, the `php.ini` limits, even if substantially raised, don’t matter, as NGINX takes precedence (if you’re not using NGINX, then `client_max_body_size` doesn’t apply here).
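To let a dump of this size through, both would need raising (sizes here are illustrative; long imports may also need `max_execution_time` raised). In `php.ini`:

```
upload_max_filesize = 2048M
post_max_size       = 2048M
```

And in the NGINX `server` block:

```
client_max_body_size 2048m;
```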
Optimise your services to use less memory, add swap, or upgrade to a bigger Droplet.