
How to Improve rsync Performance

I need to transfer 10TB of data from one machine to another. Those 10TB of files live on a large RAID that spans 7 disks, and the target machine has another large RAID that spans 12 disks. Copying the files locally is not practical, so I decided to copy them over the LAN.

Four options pop up in my head: scp, rsync, rsyncd (rsync as a daemon) and netcat.

scp

scp is handy and easy to use, but it comes with two disadvantages: it is slow and not fault-tolerant. Since scp aims for the highest security, all data is encrypted before the transfer. This slows down the overall performance, because the encryption adds overhead to the data stream and consumes extra CPU. If the transfer is interrupted, there is no easy way to resume the process other than transferring everything again. Here are some example commands:

#Source machine
#Typical speed is about 20 to 30MB/s
scp -r /data target_machine:/data

#Or you can enable compression on the fly
#If your data is already compressed, you may see no speed improvement, or even a slowdown
scp -rC /data target_machine:/data
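
If an scp transfer does die halfway, one common workaround (a sketch using the same paths as above; this is not an scp feature) is to let rsync finish the job, since it skips files that already arrived intact:

#Complete an interrupted copy; --partial keeps half-transferred files
#so they can be finished instead of resent from scratch
rsync -av --partial /data/ target_machine:/data/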

rsync

rsync is similar to scp. It comes with encryption (via SSH), so the data is safe in transit. It also lets you transfer only the files that are newer on the source, which reduces the amount of data sent. However, it has a few disadvantages: a long decision time, encryption overhead, and extra computational cost (e.g., file comparison, encryption and decryption). For example, when I used rsync to transfer 10TB of files from one machine to another (where the directory on the target machine was blank), it easily took 5 hours to determine which files needed to be transferred before the actual data transfer began.

#Run on the target machine
rsync -avzr -e ssh --delete-after source_machine:/data/ /data/

#Use a less secure encryption algorithm to speed up the process
rsync -avzr --rsh="ssh -c blowfish" --delete-after source_machine:/data/ /data/

#Use an even less secure algorithm to get the top speed
rsync -avzr --rsh="ssh -c arcfour" --delete-after source_machine:/data/ /data/

#By default, rsync decides what to send using file size and modification time,
#and runs its delta-transfer algorithm (rolling checksums) on changed files.
#Skip the delta computation and send whole files instead:
rsync -avzr --rsh="ssh -c arcfour" --delete-after --whole-file source_machine:/data/ /data/

Anyway, no matter what you do, the top speed of rsync on a consumer-grade gigabit network is around 45MB/s. On average, the speed is around 25-35MB/s. Keep in mind that this number does not include the decision time, which can be a few hours.
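
If you want to know how long the decision phase alone takes, you can time a dry run; with -n, rsync builds the file list and transfer plan without sending any data (my own sketch, not from the original post):

#Time the decision phase only; nothing is transferred
time rsync -avzrn source_machine:/data/ /data/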

rsyncd (rsync as a daemon)

Thanks to a comment from one of our readers, I got a chance to investigate running rsync as a daemon. The idea is similar to plain rsync, except that on the server we run rsync as a service/daemon and specify which directories we want to "export" to the clients (e.g., /usr/ports). Because the clients speak the rsync protocol straight to the daemon, with no SSH layer in between, building the file list and deciding what to transfer is much faster. Here is how to set up an rsync server on FreeBSD:

sudo nano /usr/local/etc/rsyncd.conf

And this is my configuration file:

pid file = /var/run/rsyncd.pid

#Notice that I use derrick here instead of a system user such as nobody.
#That's because nobody does not have permission to access the path, i.e., /data/
#Either make the source directory readable by "nobody", or change the daemon user.
uid = derrick
gid = derrick
use chroot = no
max connections = 4
syslog facility = local5

[mydata]
   path = /data/
   comment = data
Don't forget to include the following in /etc/rc.conf, so that the service will be started automatically.

rsyncd_enable="YES"
#Let's start the rsync service:

sudo /usr/local/etc/rc.d/rsyncd start

To pull the files from the server, run the following on the client:

rsync -av myserver::mydata /data/

#Or you can enable compression
rsync -avz myserver::mydata /data/
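
A quick way to confirm the daemon is up and exporting the module is to ask it for its module list (standard rsync daemon behavior, assuming module listing has not been disabled):

#Lists the exported modules, e.g., "mydata"
rsync myserver::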

To my surprise, it works much better than running rsync over SSH. Here are some numbers I collected while transferring 10TB of files from ZFS to ZFS:

Bandwidth measured on the client machine: 70MB/s

zpool IO speed on the client side: 75MB/s

P.S. Initially the speed was about 45-60MB/s; after I tweaked my zpool, I got the top speed up to 75-80MB/s. Please check out here for references.
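
The zpool tweaks themselves are covered in the linked post. As a rough illustration only (the dataset name and values below are assumptions, not my actual settings), the usual ZFS knobs for sequential throughput look like this:

#Skip access-time updates on every read
zfs set atime=off mypool/data
#Cheap compression often helps more than it costs
zfs set compression=lz4 mypool/data
#Larger records favor big sequential files (needs large_blocks support)
zfs set recordsize=1M mypool/data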

I noticed that the decision time is much faster than with rsync over SSH. The process is also much more stable; I saw zero interruptions, whereas plain rsync would occasionally die with errors like:

rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at io.c(521) [receiver=3.1.0]
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(632) [generator=3.1.0]
rsync: [receiver] write error: Broken pipe (32)

netcat

netcat is similar to cat, except that it works at the network level. I decided to use netcat for the initial transfer; if it gets interrupted, I will let rsync take over from there. netcat does not encrypt the data, so the overhead is very small. If you transfer files within a local network and you don't care about security, netcat is a perfect choice.

There is only one disadvantage of using netcat: it can only handle one stream at a time. That doesn't mean you need to run netcat for every single file; instead, we can tar the files before feeding them to netcat, and untar them at the receiving end. As long as we do not compress the files, the CPU usage stays small.

#Open two terminals, one for the source and one for the target machine.

#On the target machine:
#Go to the directory, e.g.,
cd /data

#Pick a port number that is not being used, e.g., 9999, and listen on it:
nc -l 9999 | tar -xpvf -

#On the source machine:
#Go to the directory, e.g.,
cd /data

#Send everything to the listener:
tar -cf - . | nc target_machine 9999

Unlike rsync, the process starts right away, and the maximum speed is around 45 to 60MB/s on a gigabit network.
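
netcat itself gives no progress indication. If you have the pv tool installed, you can drop it into the pipe on the source side to watch the throughput; this is my addition, not part of the original recipe:

#pv prints the current transfer rate of the stream passing through it
tar -cf - . | pv | nc target_machine 9999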

Conclusion

Candidates   Top Speed (w/o compression)   Top Speed (w/ compression)   Resume   Stability   Instant Start?
scp          40MB/s                        25MB/s                       No       Low         Instant
rsync        25MB/s                        50MB/s                       Yes      Medium      Long Preparation
rsyncd       30MB/s                        70MB/s                       Yes      High        Short Preparation
netcat       60MB/s (tar w/o -z)           40MB/s (tar w/ -z)           No       Very High   Instant

Choice   Command   Pros                                                    Cons
#1       scp       can be sped up by choosing a simpler cipher;            -
                   can recursively copy directories
#2       rsync     flexible and convenient for directory synchronization   possible, but not easy, to configure NOT to use encryption
#3       sftp      can be sped up by choosing a simpler cipher             can't recursively copy directories

Notice: each tool consumes about 5-10% of CPU on both the sender and the receiver machines, apparently doing encryption/decryption.

TEST DETAILS

These are the actual commands and the output they generated.

rsync

default "-a" --archive mode
"-z" compress
rsync -a --progress DIR2REPLICATE root@10.10.10.2:/tmp
   411533312   0%   45.38MB/s    0:26:04
  1407877120   1%   44.41MB/s    0:26:16
  1716748288   2%   42.09MB/s    0:27:36
  2002550784   2%   46.47MB/s    0:24:54
  2382397440   3%   45.31MB/s    0:25:24
  2762407936   3%   45.34MB/s    0:25:15
rsync -az --progress DIR2REPLICATE root@10.10.10.2:/tmp
   991383915 100%   13.67MB/s    0:01:09
   990955265 100%   14.02MB/s    0:01:07
   202624740 100%   15.42MB/s    0:00:12
   202771784 100%   15.87MB/s    0:00:12
    91676674 100%   12.86MB/s    0:00:06
    91628045 100%   11.76MB/s    0:00:07
  1082301721 100%   16.86MB/s    0:01:01
  1081744094 100%   17.14MB/s    0:01:00
   444531263 100%   13.06MB/s    0:00:32
   444311917 100%   12.97MB/s    0:00:32
    25956199 100%   11.99MB/s    0:00:02
    25387962 100%   16.94MB/s    0:00:01
    94059363 100%   15.51MB/s    0:00:05
    94189273 100%   14.61MB/s    0:00:06
   369550738 100%   16.31MB/s    0:00:21
   370924791 100%   15.96MB/s    0:00:22
   143659839 100%   14.75MB/s    0:00:09
   141681760 100%   14.58MB/s    0:00:09
    74662680 100%   14.45MB/s    0:00:04
    73882769 100%   12.73MB/s    0:00:05
     1809543 100%   13.59MB/s    0:00:00
###
### "-c arcfour" cipher is defined in RFC 4253; it is plain RC4 with a 128-bit key
###
rsync -a -P -e "ssh -T -c arcfour -o Compression=no -x" DIR2REPLICATE root@10.10.10.2:/tmp
  1081744094 100%   65.35MB/s    0:00:15
444531263 100%   56.34MB/s    0:00:07
444311917 100%   61.61MB/s    0:00:06
369550738 100%   53.94MB/s    0:00:06
370924791 100%   60.03MB/s    0:00:05
23319017231 100%   65.89MB/s    0:05:37
23308793162 100%   64.88MB/s    0:05:42
11951287020 100%   65.68MB/s    0:02:53
3453648896  28%   68.11MB/s    0:02:0

scp

default "-r" recursive
"-C" compress, "-r" recursive
scp -r DIR2REPLICATE root@10.10.10.2:/tmp
100%  193MB  64.4MB/s   00:03
100%  424MB  60.6MB/s   00:07
100%  945MB  63.0MB/s   00:15
100%  945MB  59.1MB/s   00:16
100% 1032MB  64.5MB/s   00:16
100% 1032MB  60.7MB/s   00:17
100%  749MB  53.5MB/s   00:14
100% 1253MB  62.6MB/s   00:20
18% 4615MB  62.6MB/s   05:18
scp -Cr DIR2REPLICATE root@10.10.10.2:/tmp
100%  193MB  16.1MB/s   00:12
100%  424MB  14.6MB/s   00:29
100%  945MB  15.0MB/s   01:03
100%  945MB  14.8MB/s   01:04
100%  424MB  14.1MB/s   00:30
100%  352MB  17.6MB/s   00:20
100%  193MB  17.6MB/s   00:11
100%  135MB  16.9MB/s   00:08
100% 1032MB  17.8MB/s   00:58
100% 1032MB  17.8MB/s   00:58
100%  354MB  17.7MB/s   00:20
100%  749MB  18.3MB/s   00:41
100% 1253MB  18.4MB/s   01:08
  6% 1518MB  17.7MB/s   21:43
"-c arcfour" cipher is defined in RFC 4253; it is plain RC4 with a 128-bit key.
scp -c arcfour -r DIR2REPLICATE root@10.10.10.2:/tmp
100%  424MB 141.3MB/s   00:03
100%  945MB 135.0MB/s   00:07
100%  945MB 189.1MB/s   00:05
100%  424MB 141.2MB/s   00:03
100%  352MB 117.5MB/s   00:03
100% 1032MB 147.4MB/s   00:07
100% 1032MB 147.5MB/s   00:07
100%  749MB 149.8MB/s   00:05
100% 1253MB 156.6MB/s   00:08
100%   24GB 142.0MB/s   02:53
100%  595MB 119.1MB/s   00:05
100%   82GB 138.3MB/s   10:09
51% 9099MB 141.3MB/s   01:01
sftp

default behavior
"-R" to increase request queue length (default is 64)
"-B" to increase read/write request size (default is 32 KB)
sftp root@10.10.10.2:/tmp
10% 2363MB  57.8MB/s   05:43 ETA
15% 3349MB  58.1MB/s   05:25 ETA
32% 7311MB  59.3MB/s   04:11 ETA
35% 7803MB  60.6MB/s   03:58 ETA
43% 9594MB  62.1MB/s   03:23 ETA
69%   15GB  58.6MB/s   01:55 ETA
77%   17GB  62.1MB/s   01:20 ETA
sftp -R 128 -B 65536 root@10.10.10.2:/tmp
  2%  551MB  58.9MB/s   06:08 ETA
  8% 1806MB  62.3MB/s   05:28 ETA
41% 9170MB  60.6MB/s   03:35 ETA
56%   12GB  62.6MB/s   02:32 ETA
100%   22GB  62.5MB/s   05:56
"-c arcfour" cipher is defined in RFC 4253; it is plain RC4 with a 128-bit key.
sftp -oCiphers=arcfour root@10.10.10.2:/tmp
3%  711MB 142.5MB/s   02:31 ETA
18% 4115MB 146.0MB/s   02:04 ETA
23% 5156MB 148.1MB/s   01:55 ETA
28% 6379MB 144.6MB/s   01:49 ETA
34% 7672MB 144.0MB/s   01:41 ETA
37% 8389MB 143.7MB/s   01:36 ETA
62%   14GB 143.8MB/s   00:58 ETA
85%   19GB 142.4MB/s   00:22 ETA
92%   20GB 142.3MB/s   00:12 ETA
100%   22GB 144.4MB/s   02:34
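
One note from me, not part of the original test: the "can't recursively copy directories" limitation applies to older sftp clients only. OpenSSH 5.4 and later accept -r on the command line, e.g., to fetch a directory recursively:

sftp -r root@10.10.10.2:/tmp/DIR2REPLICATE .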

TEST ENVIRONMENT

The test was performed between two servers interconnected by a private 10 Gbit link with 9000 MTU "jumbo frames". The copied files were large (hundreds of GB) binary files.

iperf network bandwidth test between 10.10.10.2 and 10.10.10.1
Network interface configuration (10 Gbit, MTU 9000)
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
————————————————————
[  4] local 10.10.10.2 port 5001 connected with 10.10.10.1 port 57279
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  11.5 GBytes  9.89 Gbits/sec
[  5]  0.0-30.0 sec  34.6 GBytes  9.89 Gbits/sec
[  4]  0.0- 0.9 sec  1000 MBytes  9.80 Gbits/sec
[  5]  0.0- 8.8 sec  9.77 GBytes  9.53 Gbits/sec
[  4]  0.0- 8.7 sec  10.0 GBytes  9.89 Gbits/sec
[  5]  0.0-86.8 sec   100 GBytes  9.89 Gbits/sec
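
For reference, output like the above comes from a standard iperf2 server/client pair; the exact flags below are my assumption, not the original invocation:

### On 10.10.10.2 (server) ###
iperf -s

### On 10.10.10.1 (client), e.g., a 30-second run ###
iperf -c 10.10.10.2 -t 30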
[root@10.10.10.2]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE="bond0"
BOOTPROTO=none
IPADDR=10.10.10.2
NETMASK=255.255.255.192
ONBOOT="yes"
USERCTL=no
TXQUEUELEN=100000
MTU=8912
BONDING_OPTS="mode=1 miimon=200 primary=eth0"
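
To verify that jumbo frames survive the path end to end, a quick check (mine, not from the original write-up) is a non-fragmenting ping whose payload is the MTU minus the 28-byte IP/ICMP header; for the MTU of 8912 configured above:

ping -M do -s 8884 10.10.10.2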


ALTERNATIVE METHODS TO MOVE DATA FAST

Copy directories via netcat: tar | nc. This renders a speed of ~251 MB/s (= ~1 TB/hr).

### On receiver ###
nc -v -l 5555 | tar -xvf -

### On Sender: test2del - large directory to move ###
time tar -cvf - test2del | nc -v 10.100.100.2 5555

### Output calculated ###
11GB in  27.465 s = 293 MB/s
42GB in 2m51.513s = 249 MB/s (~1 TB/hr)
42GB in 2m50.630s = 251 MB/s (~1 TB/hr)
