There are many scenarios in which you might have to move your data and operating requirements from one server to another. You may need to implement your solutions in a new datacenter, upgrade to a larger machine, or transition to new hardware or a new VPS provider.
Whatever your reasons, there are many considerations to keep in mind when migrating from one system to another. Getting functionally equivalent configurations can be difficult if you are not working with a configuration management solution such as Chef, Puppet, or Ansible. You need to not only transfer data, but also configure your services to operate in the same way on a new machine.
Note: As a general note, modern deployments should always make use of a configuration management system wherever possible, whether they are designed to be transient Kubernetes nodes or they are running a combination of system services and containerized software. This guide will be primarily useful when this is not the case, and services need to be cataloged and migrated manually.
In this guide, you will review how to prepare your source and target systems for a migration. This will include getting your two machines to communicate with SSH keys, and an investigation into which components need to be transferred. You will begin the actual migration in the next article in this series.
The first step to take when performing any potentially destructive action is to create fresh backups. You don’t want to be left in a situation where a command breaks something on your current production machine before the replacement is up and running.
There are a number of different ways to back up your server. Your selection will depend on what options make sense for your scenario and what you are most comfortable with.
If you have access to the physical hardware and a place to store backups (a disk drive, USB storage, etc.), you can clone the disk using any one of the many disk-imaging backup solutions available. A functional equivalent when dealing with cloud servers is to take a snapshot or image from within your provider’s control panel interface.
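If a full disk image or snapshot is not practical, you can at least capture a file-level copy of your most important directories with tar before making any changes. The paths below are only examples; adjust them to cover the data and configuration that matter on your system, and copy the resulting archives off the server (for example, with scp) so they survive anything that happens to the machine.
- tar -czf /root/etc-backup.tar.gz /etc
- tar -czf /root/home-backup.tar.gz /home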
Once you have completed backups, you are ready to continue. For the remainder of this guide, you will need to run many of the commands as root, or by using sudo.
Before you begin a migration, you should configure your target system to match your source system.
You will want to match as much as you can between the current server and the one you plan on migrating to. You may want to upgrade your current server before migrating so that it more closely matches a newer target system, and then make another set of backups afterward. The important thing is that the two systems match as closely as possible when you start the actual migration.
Most of the information that will help you decide which server system to create for the new machine can be retrieved with the uname command:
- uname -r
Output
5.4.0-26-generic
This is the version of the kernel that your current system is running. In order to make things go smoothly, it’s a good idea to try to match that on the target system.
You should also try to match the distribution and version of your source server. If you don’t know the version of the distribution that you have installed on the source machine, you can find out by typing:
- cat /etc/issue
Output
Ubuntu 20.04.3 LTS \n \l
You should create your new server with these same parameters if possible. In this case, you would create an Ubuntu 20.04 system. You should also try to match the kernel version as closely as possible. Usually, this should be the most up-to-date kernel available from your Linux distribution’s repositories.
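As a quick cross-check on either machine, the lsb_release utility (where installed) prints the distribution release on one line, and running uname -r again on the new server lets you confirm the kernel match:
- lsb_release -ds
- uname -r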
You’ll need your servers to be able to communicate so that they can transfer files. In order to do this, you should exchange SSH keys between them. You can learn how to configure SSH keys on a Linux server.
You’ll need to create a new key on your target server so that you can add it to your existing server’s authorized_keys file. This is cleaner than the other way around, because the new server will not be left with a stray key in its authorized_keys file when the migration is complete.
First, on your destination machine, check that your root user doesn’t already have an SSH key by typing:
- ls ~/.ssh
Output
authorized_keys
If you see files called id_rsa.pub and id_rsa, then you already have keys and you’ll just need to transfer them.
If you don’t see those files, create a new key pair using ssh-keygen:
- ssh-keygen -t rsa
Press “Enter” through all of the prompts to accept the defaults.
Now, you can transfer the key to the source server by piping it through ssh:
- cat ~/.ssh/id_rsa.pub | ssh other_server_ip "cat >> ~/.ssh/authorized_keys"
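Alternatively, if the ssh-copy-id utility is installed on your target machine, it performs the same key transfer in a single step:
- ssh-copy-id other_server_ip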
You should now be able to SSH freely to your source server from the target system without providing a password:
- ssh other_server_ip
This will make any further migration steps go much more smoothly.
Now you’ll do some more in-depth analysis of your source system.
During the course of operations, your software requirements can change. Sometimes old servers have some services and software that were needed at one point, but have been replaced.
In general, unneeded services can be disabled and, if completely unnecessary, uninstalled, but taking stock of them can be time-consuming. You’ll need to discover what services are being used on your source server, and then decide if those services should exist on your new server.
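For example, on a Systemd-based system (covered next), a unit that you decide is no longer needed can be stopped and prevented from starting at boot in a single command. The unit name here is only a placeholder:
- systemctl disable --now example.service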
The way that you discover services and runlevels depends on the type of “init” system that your server employs. The init system is responsible for starting and stopping services, either at the user’s command or automatically. From about 2014 onward, almost all major Linux distributions adopted an init system called Systemd, and this guide assumes a Systemd-based system.
In order to list the services that are registered with Systemd, you can use the systemctl command:
- systemctl list-units -t service
Output
UNIT LOAD ACTIVE SUB DESCRIPTION >
accounts-daemon.service loaded active running Accounts Service >
apparmor.service loaded active exited Load AppArmor profiles >
apport.service loaded active exited LSB: automatic crash repor>
atd.service loaded active running Deferred execution schedul>
blk-availability.service loaded active exited Availability of block devi>
cloud-config.service loaded active exited Apply the settings specifi>
cloud-final.service loaded active exited Execute cloud user/final s>
cloud-init-local.service loaded active exited Initial cloud-init job (pr>
cloud-init.service loaded active exited Initial cloud-init job (me>
console-setup.service loaded active exited Set console font and keyma>
containerd.service loaded active running containerd container runti>
…
For its service management, Systemd implements a concept of “targets”. While systems with traditional init systems could only be in one “runlevel” at a time, a server that uses Systemd can reach several targets concurrently. This is more flexible in practice, but figuring out what services are active can be more difficult.
You can see which targets are currently active by typing:
- systemctl list-units -t target
Output
UNIT LOAD ACTIVE SUB DESCRIPTION
basic.target loaded active active Basic System
cloud-config.target loaded active active Cloud-config availability
cloud-init.target loaded active active Cloud-init target
cryptsetup.target loaded active active Local Encrypted Volumes
getty.target loaded active active Login Prompts
graphical.target loaded active active Graphical Interface
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target loaded active active Local File Systems
multi-user.target loaded active active Multi-User System
network-online.target loaded active active Network is Online
…
You can list all available targets by typing:
- systemctl list-unit-files -t target
Output
UNIT FILE STATE VENDOR PRESET
basic.target static enabled
blockdev@.target static enabled
bluetooth.target static enabled
boot-complete.target static enabled
cloud-config.target static enabled
cloud-init.target enabled-runtime enabled
cryptsetup-pre.target static disabled
cryptsetup.target static enabled
ctrl-alt-del.target disabled enabled
…
From here, you can find out which services are associated with each target. Targets can have services or other targets as dependencies, so you can see which units each target pulls in by typing:
- systemctl list-dependencies target_name.target
multi-user.target is a commonly used target on Systemd servers that is reached at the point in the startup process when users are able to log in. For instance, you might type something like this:
- systemctl list-dependencies multi-user.target
Output
multi-user.target
● ├─apport.service
● ├─atd.service
● ├─console-setup.service
● ├─containerd.service
● ├─cron.service
● ├─dbus.service
● ├─dmesg.service
● ├─docker.service
● ├─grub-common.service
● ├─grub-initrd-fallback.service
…
This will list the dependency tree of that target, giving you a list of services and other targets that are started when that target is reached.
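For migration planning, it can also be useful to narrow the view to only those services that are enabled to start at boot, since these are the units you will most likely need to recreate on the new machine:
- systemctl list-unit-files -t service --state=enabled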
While most services configured by your package manager will be registered with the init system, some other software, such as Docker deployments, may not be.
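If Docker is in use on your source machine, for instance, listing the running containers gives you a view of containerized services that systemctl will not show directly:
- docker ps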
You can try to find these other services and processes by looking at the network ports and Unix sockets being used by these services. In most cases, services communicate with each other or outside entities in some way. There are only a certain number of server interfaces on which services can communicate, and checking those interfaces is a good way to spot other services.
One tool that you can use to discover network ports and in-use Unix sockets is netstat. You can run netstat with the -nlp flags in order to get an overview:
- netstat -nlp
-n specifies that numerical IP addresses should be shown in the output, rather than hostnames or usernames. When checking a local server, this is usually more informative.
-l specifies that netstat should only display actively listening sockets.
-p displays the process ID (PID) and the name of each process using the port or socket.
Output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8200 0.0.0.0:* LISTEN 104207/vault
tcp 0 0 0.0.0.0:1935 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:1936 0.0.0.0:* LISTEN 197885/stunnel4
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 162540/systemd-reso
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 129518/sshd: /usr/s
tcp 0 0 127.0.0.1:3000 0.0.0.0:* LISTEN 99465/node /root/he
tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:56733 0.0.0.0:* LISTEN 170269/docker-proxy
tcp6 0 0 :::80 :::* LISTEN 3691671/nginx: mast
tcp6 0 0 :::22 :::* LISTEN 129518/sshd: /usr/s
tcp6 0 0 :::443 :::* LISTEN 3691671/nginx: mast
tcp6 0 0 :::56733 :::* LISTEN 170275/docker-proxy
udp 0 0 127.0.0.53:53 0.0.0.0:* 162540/systemd-reso
raw6 0 0 :::58 :::* 7 162524/systemd-netw
raw6 0 0 :::58 :::* 7 162524/systemd-netw
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix 2 [ ACC ] STREAM LISTENING 5313074 1/systemd /run/systemd/userdb/io.systemd.DynamicUser
unix 2 [ ACC ] SEQPACKET LISTENING 12985 1/systemd /run/udev/control
unix 2 [ ACC ] STREAM LISTENING 12967 1/systemd /run/lvm/lvmpolld.socket
unix 2 [ ACC ] STREAM LISTENING 12980 1/systemd /run/systemd/journal/stdout
unix 2 [ ACC ] STREAM LISTENING 16037236 95187/systemd /run/user/0/systemd/private
…
netstat output contains two separate blocks: one for network ports, and one for Unix sockets. If you see services here that you do not have information about through the init system, you’ll have to figure out why that is and whether you intend to migrate those services as well.
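When an entry is unfamiliar, the PID in the last column is a useful starting point for investigation. For example, you could inspect one of the processes listed above with ps (substitute a PID from your own output):
- ps -fp 104207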
You can get similar information about the ports that services are making available by using the lsof command. The -i flag limits the output to network connections, while -n and -P skip resolving hostnames and port names:
- lsof -nP -i
Output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node\x20/ 99465 root 20u IPv4 16046039 0t0 TCP 127.0.0.1:3000 (LISTEN)
vault 104207 vault 8u IPv4 1134285 0t0 TCP *:8200 (LISTEN)
sshd 129518 root 3u IPv4 1397496 0t0 TCP *:22 (LISTEN)
sshd 129518 root 4u IPv6 1397507 0t0 TCP *:22 (LISTEN)
systemd-r 162540 systemd-resolve 12u IPv4 5313507 0t0 UDP 127.0.0.53:53
systemd-r 162540 systemd-resolve 13u IPv4 5313508 0t0 TCP 127.0.0.53:53 (LISTEN)
docker-pr 170269 root 4u IPv4 1700561 0t0 TCP *:56733 (LISTEN)
docker-pr 170275 root 4u IPv6 1700573 0t0 TCP *:56733 (LISTEN)
stunnel4 197885 stunnel4 9u IPv4 1917328 0t0 TCP *:1936 (LISTEN)
sshd 3469804 root 4u IPv4 22246413 0t0 TCP 159.203.102.125:22->154.5.29.188:36756 (ESTABLISHED)
nginx 3691671 root 7u IPv4 2579911 0t0 TCP *:8080 (LISTEN)
nginx 3691671 root 8u IPv4 1921506 0t0 TCP *:80 (LISTEN)
nginx 3691671 root 9u IPv6 1921507 0t0 TCP *:80 (LISTEN)
nginx 3691671 root 10u IPv6 1921508 0t0 TCP *:443 (LISTEN)
nginx 3691671 root 11u IPv4 1921509 0t0 TCP *:443 (LISTEN)
nginx 3691671 root 12u IPv4 2579912 0t0 TCP *:8088 (LISTEN)
nginx 3691671 root 13u IPv4 2579913 0t0 TCP *:1935 (LISTEN)
nginx 3691674 www-data 7u IPv4 2579911 0t0 TCP *:8080 (LISTEN)
nginx 3691674 www-data 8u IPv4 1921506 0t0 TCP *:80 (LISTEN)
nginx 3691674 www-data 9u IPv6 1921507 0t0 TCP *:80 (LISTEN)
nginx 3691674 www-data 10u IPv6 1921508 0t0 TCP *:443 (LISTEN)
nginx 3691674 www-data 11u IPv4 1921509 0t0 TCP *:443 (LISTEN)
nginx 3691674 www-data 12u IPv4 2579912 0t0 TCP *:8088 (LISTEN)
nginx 3691674 www-data 13u IPv4 2579913 0t0 TCP *:1935 (LISTEN)
Both netstat and lsof are core Linux administration tools that are useful in a variety of other contexts.
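Note that on some newer distributions netstat may not be installed by default; the ss utility from the iproute2 package provides similar information and accepts comparable flags:
- ss -tulnp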
At this point, you should have a good idea about what services are running on your source machine that you should be implementing on your target server.
You should have a list of services that you know you will need to implement. For the transition to go smoothly, it is important to attempt to match versions wherever possible.
You shouldn’t necessarily try to review every single package installed on the source system and attempt to replicate it on the new system, but you should check the software components that are important for your needs and try to find their version number.
You can try to get version numbers from the software itself, sometimes by passing -v or --version flags to each command, but this is more straightforward to do through your package manager. If you are on an Ubuntu/Debian-based system, you can see which version of a package is installed using the dpkg command:
- dpkg -l | grep package_name
If you are instead on a Rocky Linux, RHEL, or Fedora-based system, you can use rpm for the same purpose:
- rpm -qa | grep package_name
This will give you a good idea of the package version that you want to match. Make sure to retain the version numbers of any relevant software.
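When you provision the target server, most package managers can install a specific version if it is still available in the repositories. On Ubuntu/Debian, for example, apt accepts a pinned version string; the package name and version below are placeholders:
- apt install package_name=1.2.3-1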
You should now have a good idea of what processes and services on your source server need to be transferred over to your new machine. You should also have the preliminary steps completed to allow your two servers to communicate with each other.
The groundwork for your migration is now complete. In the next article in this series, you will begin the actual migration process.