Benchmarking compression on FreeBSD

After I commented on Hacker News that I was having performance issues with compressed ZFS, I started looking into benchmarking the performance of the Atom C2758 CPU that's in my home NAS.

I found the fsbench project via a reply to my comment and hit my first challenge. The code I needed to run on my FreeBSD system was hosted in a VCS called Fossil.

[voltagex@beastie ~/src]$ sudo pkg install fossil
Password:
Updating FreeBSD repository catalogue...
Fetching meta.txz: 100% 944 B 0.9kB/s 00:01
Fetching packagesite.txz: 100% 6 MiB 824.2kB/s 00:07
Processing entries: 100%
FreeBSD repository update completed. 25371 packages processed.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
fossil: 1.35_1,2

Number of packages to be installed: 1

The process will require 3 MiB more space.
1 MiB to be downloaded.

Proceed with this action? [y/N]: y
Fetching fossil-1.35_1,2.txz: 100% 1 MiB 371.8kB/s 00:03
Checking integrity... done (0 conflicting)
[1/1] Installing fossil-1.35_1,2...
[1/1] Extracting fossil-1.35_1,2: 100%
Message from fossil-1.35_1,2:
After each upgrade do not forget to run the following command:

fossil all rebuild

[voltagex@beastie ~/src]$ fossil clone https://chiselapp.com/user/Justin_be_my_guide/repository/fsbench
Usage: fossil clone ?OPTIONS? FILE-OR-URL NEW-REPOSITORY
[voltagex@beastie ~/src]$ fossil clone https://chiselapp.com/user/Justin_be_my_guide/repository/fsbench fsbench
#nothing happens for a while
Round-trips: 2 Artifacts sent: 0 received: 3383
Clone done, sent: 564 received: 6760744 ip: 216.250.117.7
Rebuilding repository meta-data...
100.0% complete...
Extra delta compression...
Vacuuming the database...
project-id: c2da1bb713a9809141b9c12d0019c6d73c005f0b
server-id: bd54879d0e061500b44f7ece394c4accab5fc7dd
admin-user: voltagex (password is "3b518f")

Well okay, apparently I'm an admin now. All I really wanted to do was get the code.

[voltagex@beastie ~/src]$ cd fsbench
-bash: cd: fsbench: Not a directory

Wat?

https://www.fossil-scm.org/xfer/doc/tip/www/quickstart.wiki (no IDs or anchors so I can’t link directly to the section) says that I need to run fossil open on the file that was created.

[voltagex@beastie ~/src]$ file fsbench
fsbench: Fossil repository - SQLite3 database

Oh hey, that's kinda neat.
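Since a Fossil repository is literally just an SQLite database, you can poke around inside it with the sqlite3 shell if you're curious (assuming the sqlite3 package is installed; this is purely an aside, not something Fossil needs):

sqlite3 fsbench ".tables"
#lists the internal tables Fossil uses to store artifacts and metadata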

fossil open fsbench
#snipped file listing
project-name: fsbench
repository: /usr/home/voltagex/src/fsbench
local-root: /usr/home/voltagex/src/
config-db: /home/voltagex/.fossil
project-code: c2da1bb713a9809141b9c12d0019c6d73c005f0b
checkout: bf701e2f58e59f50dc8bfacb4e7ba916b43931fc 2015-07-11 07:55:41 UTC
parent: aa43cf887068b73178d3fcaa432f17a9d3585e63 2015-04-04 18:07:33 UTC
tags: trunk
comment: Clarifications in Results.ods (user: m)
check-ins: 107

Bugger. Files all over my src/ path. fossil close doesn’t clean up the mess it made, either.

[voltagex@beastie ~/src]$ mkdir fsbench && cd fsbench && fossil open ../fsbench
mkdir: fsbench: File exists
mkdir fsbench-checkout && cd fsbench-checkout && fossil open ../fsbench
[voltagex@beastie ~/src/fsbench-checkout/]$ cd src
[voltagex@beastie ~/src/fsbench-checkout/src]$ cmake -DCMAKE_BUILD_TYPE=Release .
-bash: cmake: command not found

Dammit. I'll be right back. (FreeBSD really, really needs an Australian package mirror.)

A short time and a few package installs later, I have a working fsbench executable.
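For the record, the missing pieces were roughly the following; cmake was the only dependency I actually hit, so your package list may differ:

sudo pkg install cmake
cd ~/src/fsbench-checkout/src
cmake -DCMAKE_BUILD_TYPE=Release .
make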

Now, let's get some test data:

[voltagex@beastie ~/src/fsbench-checkout/src]$ curl https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.7.2.tar.xz | xz -d - > linux-4.7.2.tar
[voltagex@beastie ~/src/fsbench-checkout/src]$ ls -lah linux-4.7.2.tar
-rw-r--r-- 1 voltagex voltagex 646M Aug 26 21:33 linux-4.7.2.tar

[voltagex@beastie ~/src/fsbench-checkout/src]$ ./fsbench LZ4 linux-4.7.2.tar
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
LZ4 r127
231195303 (x 2.928) 173 MB/s 750 MB/s 114e6 493e6
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
done... (4*X*1) iteration(s)).

173 MB/s – that doesn't quite match the 10 megabytes a second I was getting transferring data to that dataset a couple of weeks ago.
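Before blaming the CPU, it's worth checking what the dataset is actually configured to do; tank/data below is just a placeholder for the real pool/dataset name:

zfs get compression,compressratio tank/data
#shows which algorithm the dataset uses and the ratio it's achieving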

Here's where things get a bit more interesting (for me, at least). The data I was trying to move to the NAS a little while ago was a set of tiles from a LiDAR survey done in nearby Canberra. During GovHack 2016 I'd investigated the LAZ format, which took a 135 GB dataset down to a more manageable 18.7 GB. When I copied the files to my NAS, it was the compressed form that I used. From this it seems like FreeBSD's LZ4 code will try very hard to recompress already-compressed data, whereas fsbench doesn't (LAZ is compressed, LAS isn't):

[voltagex@beastie /BoxODisks/paperweight/Data/Compressible/ACT_8ppm_Final_Delivery]$ ~/src/fsbench-checkout/src/fsbench LZ4 ACT2015-C3-ORT_6966100_55_0002_0002.las
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
LZ4 r127
873970150 (x 1.526) 89.3 MB/s 710 MB/s 30e6 244e6

[voltagex@beastie /BoxODisks/paperweight/Data/Compressible/ACT_8ppm_Final_Delivery]$ ~/src/fsbench-checkout/src/fsbench LZ4 Compressed/ACT2015-C3-ORT_6966100_55_0002_0002.laz
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
LZ4 r127
145529952 (x 1.000) 10.7 GB/s - 0e0 0e0
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
done... (4*X*1) iteration(s)).
[voltagex@beastie /BoxODisks/paperweight/Data/Compressible/ACT_8ppm_Final_Delivery]$

So yes, there's definitely a difference in compression speeds there, but it looks like the little Atom C2758 is capable enough to cope with a few compressed datasets.
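If recompressing already-compressed files like the .laz tiles ever did become the bottleneck, one option would be to keep them on a dataset with compression switched off (tank/lidar is a made-up name):

zfs set compression=off tank/lidar
#only affects new writes; existing blocks keep whatever compression they were written with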

I don’t think it’s a Samba issue either. Windows is reporting well over 100 megabytes per second copying over the network to an SSD. I’m having difficulty reproducing the original problem, possibly because of caching on the ZFS pool.
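If I want to rule the ARC out next time, one approach would be to temporarily stop the dataset caching file contents; I haven't tried this yet, and tank/data is again a placeholder:

zfs set primarycache=metadata tank/data
#reads now have to hit the disks instead of the ARC
#re-run the copy, then restore normal caching:
zfs set primarycache=all tank/data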
