There are many scenarios in which you might have to move your data and operating requirements from one server to another. You may need to implement your solutions in a new datacenter, upgrade to a larger machine, or transition to new hardware or a new VPS provider.
Whatever your reasons, there are many considerations to take into account when migrating from one system to another. Getting functionally equivalent configurations can be difficult if you are not working with a configuration management solution such as Chef, Puppet, or Ansible. You need to not only transfer data, but also configure your services to operate in the same way on the new machine.
In the previous article in this series, you learned how to transfer packages and other data with rsync. In this tutorial, you will complete your migration by migrating users, groups, crontabs, and other settings.
Linux package managers are powerful and produce reproducible installs, so by migrating your system packages in the previous tutorial, you will have carried over most of the necessary configuration settings. However, this omits some of the settings you may have changed manually on your old server, such as user and group permissions. These will need to be migrated or recreated as well.
Fortunately, all of Linux's user and group settings are contained within a few files. These files include:
/etc/passwd: This file defines users and their attributes. Despite its name, this file does not contain password information. It includes usernames, user IDs and primary group IDs, home directories, and default shells.
/etc/shadow: This file contains the actual password settings for each user. It should contain a line for each of the users defined in the passwd file, along with a hash of their password and some information about password policies.
/etc/group: This file defines each group available on your system. This includes the group name and the associated group number, along with any group memberships.
/etc/gshadow: This file contains a line for each group on the system. It lists a group’s name, a password that can be used by non-group members to access the group, a list of administrators, and other members.
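For reference, each line in these files consists of colon-separated fields. For example, an /etc/passwd entry breaks down as follows (the sammy line is taken from the output later in this tutorial):

name:password:UID:GID:GECOS:home:shell
sammy:x:1001:1002::/home/sammy:/bin/sh

The x in the password field indicates that the actual password hash lives in /etc/shadow.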
You should never copy these files directly from one live system to another. User and group numbers are automatically incremented when they are created on each system, and they will create conflicts if they do not match. Instead, you can migrate these selectively using awk, as in the previous tutorial.
You'll create a new migration file associated with each of the above files. This will let you migrate them all systematically, starting with /etc/passwd.
First, you'll need to establish whether regular user IDs begin counting from 500 or from 1000 on your source system. Most modern Linux environments begin counting from 1000 to reserve more room for system users, but if you are migrating from a very old system, it may count from 500. To check, you can view the end of your /etc/passwd file to see what your own user account number is:
- less /etc/passwd
Output
…
vault:x:997:997::/home/vault:/bin/bash
stunnel4:x:112:119::/var/run/stunnel4:/usr/sbin/nologin
sammy:x:1001:1002::/home/sammy:/bin/sh
In this case, the limit would be 1000, since your regular user IDs, in the third column of the output, are 1000 or greater. You won't be exporting users or groups below this limit. You will also exclude the nobody account, which is automatically assigned an ID of 65534.
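Rather than inferring the threshold from the output, you can also read it directly from /etc/login.defs, which defines UID_MIN and GID_MIN on most distributions:

- grep -E '^[UG]ID_MIN' /etc/login.defs

On a system that counts from 1000, this will report UID_MIN and GID_MIN values of 1000.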
Using awk, you can create a sync file for your /etc/passwd file. The awk commands in this tutorial will be provided as-is, due to awk's complex syntax, but remember that you can learn more about using awk in another tutorial.
- awk -v LIMIT=1000 -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > ~/passwd.sync
Next, you can use the same syntax and the same user ID limit to export your /etc/group file:
- awk -v LIMIT=1000 -F: '($3>=LIMIT) && ($3!=65534)' /etc/group > ~/group.sync
To parse /etc/shadow, you can use the data from your /etc/passwd file as input:
- awk -v LIMIT=1000 -F: '($3>=LIMIT) && ($3!=65534) {print $1}' /etc/passwd | tee - | egrep -f - /etc/shadow > ~/shadow.sync
The same approach works for /etc/gshadow:
- awk -v LIMIT=1000 -F: '($3>=LIMIT) && ($3!=65534) {print $1}' /etc/group | tee - | egrep -f - /etc/gshadow > ~/gshadow.sync
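Before going further, you can spot-check the four sync files to confirm that they contain the accounts and groups you expect:

- head ~/passwd.sync ~/group.sync ~/shadow.sync ~/gshadow.sync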
After you've tested these commands and verified that they create export files from real data, you can add them to the sync.sh script you've been maintaining from the last tutorial. You can run each of these commands remotely, that is, as part of the script that executes on your target machine but gathers output from the original source machine, by preceding them with ssh source_server and wrapping the awk command in quotes. Note that the dollar signs are escaped below so that your local shell passes awk's field variables through to the source server instead of expanding them:
ssh source_server "awk -v LIMIT=1000 -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > ~/passwd.sync"
ssh source_server "awk -v LIMIT=1000 -F: '($3>=LIMIT) && ($3!=65534)' /etc/group > ~/group.sync"
ssh source_server "awk -v LIMIT=1000 -F: '($3>=LIMIT) && ($3!=35534) {print $1}' /etc/passwd | tee - | egrep -f - /etc/shadow > ~/shadow.sync"
ssh source_server "awk -v LIMIT=1000 -F: '($3>=LIMIT) && ($3!=65534) {print $1}' /etc/group | tee - | egrep -f - /etc/gshadow > ~/gshadow.sync"
rsync source_server:~/passwd.sync ~/
rsync source_server:~/group.sync ~/
rsync source_server:~/shadow.sync ~/
rsync source_server:~/gshadow.sync ~/
After exporting this data to your target machine, you can automatically add your users and groups there. Unlike the other commands, though, this one will create duplicates if it is re-run in the same environment, so you should perform it manually rather than adding it to your migration script.
There is a command called newusers that can add multiple users from an input file. First, however, you'll want to use another awk command to remove the numeric IDs from your sync file:
- awk 'BEGIN { OFS=FS=":"; } {$3=""; $4=""; } { print; }' ~/passwd.sync > ~/passwd.sync.mod
Then you can pass that file to newusers:
- sudo newusers ~/passwd.sync.mod
This will add all of the users from the file to the local /etc/passwd file. It will also create the associated user groups automatically. You will have to manually add any additional groups that aren't associated with a user to the /etc/group file. Use your sync files as a point of reference to edit the corresponding target files, as in the sketch below.
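For example, to recreate a standalone group from your group.sync file, you can add it by name and then restore its memberships. This is a hypothetical sketch: deploy and sammy are placeholder names, and the target system assigns a fresh group ID:

- sudo groupadd deploy
- sudo usermod -aG deploy sammy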
For the /etc/shadow file, you can copy the second column from your shadow.sync file into the second column of the associated account on the new system. This will transfer the passwords for your accounts to the new system. You can also script these changes, depending on how many accounts you need to transfer.
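As a minimal sketch, assuming every user listed in shadow.sync now exists on the target system, you could extract username and hash pairs and feed them to chpasswd -e, which accepts pre-encrypted passwords:

- awk -F: '{print $1 ":" $2}' ~/shadow.sync | sudo chpasswd -e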
Now that your users, packages, and other data are transferred from the old system, there's one more step: transferring each of your users' system jobs and mail.
The /var/spool directory on Linux officially "contains data which is awaiting some kind of later processing." In practice, this usually includes any cron jobs that you have configured, as well as any mail that is handled by the system itself. Although you may not actually be running an email server locally, Linux's internal concept of "mail" also includes logs and notifications from some software, so it is important to ensure these are captured as well.
You can begin this process by writing another rsync command for the spool directory. The spool directory normally contains cron, mail, and some other logs:
- ls /var/spool
Output
anacron cron mail plymouth rsyslog
To transfer the mail directory to your target server, you can add another rsync command to your migration script:
rsync -azvP --progress source_server:/var/spool/mail/* /var/spool/mail/
Another directory within /var/spool that you should pay attention to is the cron directory. This directory contains cron jobs, which are used for scheduling tasks. The crontabs subdirectory contains each individual user's cron configuration.
Transfer your crontabs using rsync:
rsync -azvP --progress source_server:/var/spool/cron/crontabs/* /var/spool/cron/crontabs/
This will handle individual users' cron configurations. However, it does not capture any system-wide cron settings. Within the /etc directory, there is a system-wide crontab and a number of other directories that contain cron settings:
- ls /etc/cron*
Output
cron.d
cron.daily
cron.hourly
cron.monthly
crontab
cron.weekly
The crontab file contains system-wide cron details. The other items are directories that contain other cron information. Look into them and decide if they contain any information you need.
Once again, use rsync to transfer the relevant cron information to the new system:
- rsync -azvP --progress source_server:/etc/crontab /etc/crontab
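If the cron.d, cron.daily, or similar directories contain jobs you need, you can transfer them the same way. For example, here is a sketch for /etc/cron.d, which you can repeat for whichever directories apply:

- rsync -azvP --progress source_server:/etc/cron.d/ /etc/cron.d/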
While you are investigating the cron configurations in /etc, make sure you haven't overlooked any other configuration files. For example, the Nginx web server stores its configuration in /etc/nginx, and you should ensure that this has been captured by your migration script.
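If it hasn't, a minimal sketch of the corresponding rsync command follows. Review the transferred configuration before reloading Nginx, since certificate paths and included files may differ between the two systems:

- rsync -azvP --progress source_server:/etc/nginx/ /etc/nginx/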
Once you have your cron information on your new system, you should verify that it works. The only way of doing this correctly is to log in as each individual user and run the commands in each user's crontab manually. This will make sure that there are no permissions issues or missing file paths that would cause these commands to fail silently when running automatically.
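To see which commands you'll need to run for a given account, you can list that user's crontab first (sammy is a placeholder username here):

- sudo crontab -u sammy -l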
At this point, you should be done adding commands to your migration script and transferring data. The next step is to begin restarting all of your relevant services on the new server. For example, you can restart the nginx web server by running sudo systemctl restart nginx, although this will have been done automatically when you installed the Nginx package on the new server. For any other services, which you may have written your own unit files for or deployed via Docker, you should test restarting them manually. You should also reboot your server at least once to ensure that these services can resume properly following any downtime. Pay attention to any associated log files as you're testing to see if any issues come up.
You can perform some other spot checks as well. For instance, if you have a /data directory that you've transferred with rsync, you could navigate to that directory on both the source and target computers and run the du command to verify its size:
- cd /data
- du -hs
Output
471M .
If there is a disparity between your two systems, you should investigate.
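You can run both checks from the target machine in one step, assuming the same /data path exists on both systems:

- du -hs /data
- ssh source_server "du -hs /data"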
Next, you can check the processes that are running on each machine. You can use top to get an overview of active processes:
- top
Output
top - 21:20:33 up 182 days, 22:04, 1 user, load average: 0.00, 0.01, 0.00
Tasks: 124 total, 3 running, 121 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.0 us, 1.0 sy, 0.0 ni, 98.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 981.3 total, 82.8 free, 517.8 used, 380.7 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 182.0 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11 root 20 0 0 0 0 I 0.3 0.0 29:45.20 rcu_sched
99465 root 20 0 685508 27396 5372 S 0.3 2.7 161:41.83 node /root/hell
104207 vault 20 0 837416 236528 128012 S 0.3 23.5 134:53.49 vault
175635 root 20 0 11000 3824 3176 R 0.3 0.4 0:00.03 top
1 root 20 0 170636 9116 4200 S 0.0 0.9 8:50.40 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:01.04 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
. . .
You can also replicate some of the checks that you did initially on the source machine to see if you have correctly reproduced your environment on the new machine. You can once again run netstat with the -nlp flags to get an overview:
- netstat -nlp
Output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8200 0.0.0.0:* LISTEN 104207/vault
tcp 0 0 0.0.0.0:1935 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:1936 0.0.0.0:* LISTEN 197885/stunnel4
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 162540/systemd-reso
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 129518/sshd: /usr/s
tcp 0 0 127.0.0.1:3000 0.0.0.0:* LISTEN 99465/node /root/he
tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 3691671/nginx: mast
tcp 0 0 0.0.0.0:56733 0.0.0.0:* LISTEN 170269/docker-proxy
tcp6 0 0 :::80 :::* LISTEN 3691671/nginx: mast
tcp6 0 0 :::22 :::* LISTEN 129518/sshd: /usr/s
tcp6 0 0 :::443 :::* LISTEN 3691671/nginx: mast
tcp6 0 0 :::56733 :::* LISTEN 170275/docker-proxy
udp 0 0 127.0.0.53:53 0.0.0.0:* 162540/systemd-reso
raw6 0 0 :::58 :::* 7 162524/systemd-netw
raw6 0 0 :::58 :::* 7 162524/systemd-netw
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix 2 [ ACC ] STREAM LISTENING 5313074 1/systemd /run/systemd/userdb/io.systemd.DynamicUser
unix 2 [ ACC ] SEQPACKET LISTENING 12985 1/systemd /run/udev/control
unix 2 [ ACC ] STREAM LISTENING 12967 1/systemd /run/lvm/lvmpolld.socket
unix 2 [ ACC ] STREAM LISTENING 12980 1/systemd /run/systemd/journal/stdout
unix 2 [ ACC ] STREAM LISTENING 16037236 95187/systemd /run/user/0/systemd/private
…
You can also re-run lsof:
- sudo lsof -nP -i
Output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node\x20/ 99465 root 20u IPv4 16046039 0t0 TCP 127.0.0.1:3000 (LISTEN)
vault 104207 vault 8u IPv4 1134285 0t0 TCP *:8200 (LISTEN)
sshd 129518 root 3u IPv4 1397496 0t0 TCP *:22 (LISTEN)
sshd 129518 root 4u IPv6 1397507 0t0 TCP *:22 (LISTEN)
systemd-r 162540 systemd-resolve 12u IPv4 5313507 0t0 UDP 127.0.0.53:53
systemd-r 162540 systemd-resolve 13u IPv4 5313508 0t0 TCP 127.0.0.53:53 (LISTEN)
docker-pr 170269 root 4u IPv4 1700561 0t0 TCP *:56733 (LISTEN)
docker-pr 170275 root 4u IPv6 1700573 0t0 TCP *:56733 (LISTEN)
stunnel4 197885 stunnel4 9u IPv4 1917328 0t0 TCP *:1936 (LISTEN)
sshd 3469804 root 4u IPv4 22246413 0t0 TCP 159.203.102.125:22->154.5.29.188:36756 (ESTABLISHED)
nginx 3691671 root 7u IPv4 2579911 0t0 TCP *:8080 (LISTEN)
nginx 3691671 root 8u IPv4 1921506 0t0 TCP *:80 (LISTEN)
nginx 3691671 root 9u IPv6 1921507 0t0 TCP *:80 (LISTEN)
nginx 3691671 root 10u IPv6 1921508 0t0 TCP *:443 (LISTEN)
nginx 3691671 root 11u IPv4 1921509 0t0 TCP *:443 (LISTEN)
nginx 3691671 root 12u IPv4 2579912 0t0 TCP *:8088 (LISTEN)
nginx 3691671 root 13u IPv4 2579913 0t0 TCP *:1935 (LISTEN)
nginx 3691674 www-data 7u IPv4 2579911 0t0 TCP *:8080 (LISTEN)
nginx 3691674 www-data 8u IPv4 1921506 0t0 TCP *:80 (LISTEN)
nginx 3691674 www-data 9u IPv6 1921507 0t0 TCP *:80 (LISTEN)
nginx 3691674 www-data 10u IPv6 1921508 0t0 TCP *:443 (LISTEN)
nginx 3691674 www-data 11u IPv4 1921509 0t0 TCP *:443 (LISTEN)
nginx 3691674 www-data 12u IPv4 2579912 0t0 TCP *:8088 (LISTEN)
nginx 3691674 www-data 13u IPv4 2579913 0t0 TCP *:1935 (LISTEN)
If you transferred a web server or web-facing applications, you should also test your sites on the new server. Depending on your configuration, you may need to migrate your domain name and reissue HTTPS certificates before you can do that. If your new server is behind a VPN or another ingress layer, you may be able to test it behind a different URL before performing a full production cutover.
You'll also want to migrate your firewall rules. On RHEL-based systems, these are usually contained in /etc/sysconfig/iptables and /etc/sysconfig/ip6tables; on Debian and Ubuntu systems using iptables-persistent, they are stored in /etc/iptables/rules.v4 and /etc/iptables/rules.v6.
Prior to loading the rules into your new server, you should review them for anything that needs to be updated, such as changed IP addresses or ranges.
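If your rules are not already saved to those files, one approach, sketched here, is to export the live ruleset from the source server with iptables-save, review it, and then load it on the target with iptables-restore. This assumes your SSH user can run sudo non-interactively on the source server:

- ssh source_server "sudo iptables-save" > ~/iptables.rules
- sudo iptables-restore < ~/iptables.rules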
Once you have all of the newest data on your target server and you’ve tested your web endpoints, you can modify the DNS servers for your domain to point to your new server. Make sure that every reference to the old server’s IP is replaced with the new server’s information.
If you are using DigitalOcean’s DNS servers, you can read about how to configure your domain names.
DNS changes typically take anywhere from a few minutes to an hour to propagate to most home internet ISPs. After your DNS has updated to reflect your changes, you may have to run the migration script a final time to make sure that any data from stray requests that were still reaching your original server is transferred.
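You can watch the cutover from your own machine by querying your domain's A record. Here, example.com is a placeholder; once propagation completes, the command should return your new server's IP address:

- dig +short example.com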
Your new server should now be up and running, accepting requests and handling all of the data that was on your previous server. You should continue to closely monitor the new server for any anomalies.
Migrations are not trivial. The best chance of successfully migrating a live server is to understand your system as best as you can before you begin. Every system is different and each time, you will have to work around new issues. Do not attempt to migrate if you do not have time to troubleshoot issues that may arise.
Next, you may want to learn about configuring and deploying servers with Ansible.