Failing to build VirtualBox on Windows

In 2017, it’s still way too hard to build software on Windows. I wanted to see how difficult it’d be to make a few changes to VirtualBox, which meant rebuilding it from source. The version of VirtualBox you download as a pre-built binary isn’t quite the same as what you get when you build from source, and I think this is where the problems start.

The first challenge is getting the source at all. I gave up after an hour of waiting for it to download from the official Subversion server. As I was going to be working from a Git repo, I used Bitbucket’s Subversion import to create a mirror.

The first thing you’ll notice about the VirtualBox build system is that the configure script for Windows is written in VBScript. It’s an unusual choice, but it technically means you can get up and running on most Windows installs without too much effort. Unfortunately, this is where the easy part ends. I very quickly worked out that I wanted to use AppVeyor, mainly because their VM images are very well configured. I did try building on my own Server 2016 VM, but I couldn’t get the right version of the Windows SDK / DDK installed, or at least not in the paths the VirtualBox build system expected them to be in.

This leads me to the next reason why building VirtualBox on Windows absolutely sucks: it requires both an old version of MinGW and the Visual Studio 2010 compiler. Visual Studio 2010 was released on 12 April 2010, while MinGW 4.9.3 was released sometime in 2015, I think – SourceForge’s file browser is very hard to navigate.

I suspect choosing a 64-bit toolchain was my first mistake here, but we’ll get to that later.

I created an appveyor branch to work from, mainly so I could easily diff the changes I made to configure.vbs. When I started working on this, each build took about 20 minutes on AppVeyor, depending on where the build broke. You can see all the builds in the project’s AppVeyor history, along with my terrible commit messages. Future Adam – please write better commit / build messages so that writing these blog posts will be easier. If anyone has any suggestions for technical note-taking, I’m all ears – but I think OneNote will be hard to beat.
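The AppVeyor side of this is driven by an appveyor.yml checked into the repo. A minimal sketch of the kind of fragment involved looks something like this (the MinGW path matches the one that shows up later in the logs; everything else is illustrative, not my actual config):

```yaml
# Hypothetical appveyor.yml fragment – paths and options are illustrative
clone_depth: 1
build_script:
  - cmd: cscript configure.vbs --with-MinGW-w64=C:\tools\mingw64
```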

AppVeyor’s build history shows 63-odd builds. I know I did a few more on AWS and various virtual machines, but I gave those up quickly – I think most of the issues were around not being able to find WinDDK even though it was installed.

Apparently it took me 4 builds even to get a sensible error message out of the configure script:

Checking for MinGW32 GCC v3.3.x + Binutils + Runtime + W32API...
warning: Can't locate a suitable MinGW32 installation, ignoring since we're targeting AMD64 and won't need it.
Checking for MinGW-w64 GCC (unprefixed)...
error: Can't locate a suitable MinGW-w64 installation. Try specify the path with the --with-MinGW-w64= argument. If still no luck, consult the configure.log and the build requirements.

The hint there is that the error complains about a suitable MinGW installation: at that point it has actually found some of the required files, but buried in configure.log is the real reason it’s failing:

trying: strPathMinGWw64=C:\tools\mingw64
Testing 'C:\tools\mingw64': lib64/libgcc_s.a not found
trying: strPathMinGWw64=C:\tools\mingw64
Testing 'C:\tools\mingw64': lib64/libgcc_s.a not found

I can’t remember whether this is the point where I realised that the version of MinGW I had installed was too new, but it definitely wasn’t the only problem. Skipping ahead a few builds shows I was copying files into the lib64 directory with xcopy, betting that the build system was looking in an old or obsolete path.

I don’t know why the configure script hides most of the useful information in configure.log – for example MinGW-w64 version '5.3.0' is not supported (or configure.vbs failed to parse it correctly).

Anyway, after dropping the MinGW version back I was able to progress past this point. I still needed to copy files from lib to lib64.
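The copy itself was nothing clever – something along these lines, where the destination is the lib64 path configure.log complained about (paths are the ones from the log; the exact flags are an assumption):

```batch
rem Mirror MinGW's lib\ into the lib64\ directory configure.vbs insists on
xcopy /E /I C:\tools\mingw64\lib C:\tools\mingw64\lib64
```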

For the next little while it was as simple as adding dependencies and rebuilding. The builds only took 5 or so minutes to fail, so it wasn’t too bad, although it was very frustrating to wait and then find out I’d messed up a 7z command line.

Things get interesting when your software depends on OpenSSL on Windows. I’ve never actually built OpenSSL from source myself, and I’m afraid of the amount of whisky I’ll need when I eventually try. Security implications be damned: by build 15 I’d ‘cheated’, downloaded some pre-built libraries and included them in the build. Unfortunately this involved renaming the DLLs to the ‘old’ OpenSSL names – sometime in 2016 the OpenSSL project changed the library names, breaking decades of assumptions.

curl is another story. Despite libcurl being one of the most commonly used libraries anywhere (it’s probably in your phone, your car and maybe even your lightbulb), there were no precompiled Windows binaries that I could find.

A slight diversion to work out how to build curl, then, I guess. This means (in theory) linking to OpenSSL again, too. Luckily someone else had already done the hard work, and I could lean on their build scripts, which contain some pretty neat batch scripting; soon I had a curl build of my own. I really should have learned how to use GitHub Releases or Bintray, but by this point the silly project had consumed enough of my evening, so I threw a zip up on my NeoCities site and carried on.

20 builds later, I’d passed the trials of configure.vbs, and in theory I was ready to build VirtualBox.

Execute env.bat once before you start to build VBox:

This in itself proved a challenge because I couldn’t work out how to set the path to kmk so that AppVeyor’s build system would find it. I believe AppVeyor runs everything in a single PowerShell instance by default, with every command in a ‘child’ process, so the variables never ended up where they needed to be.
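Conceptually, the trick is to keep everything inside one cmd.exe process, so the variables env.bat sets are still alive when kmk starts. A sketch only (env.bat’s location is an assumption):

```batch
rem Run the environment setup and the build in the same cmd.exe instance
cmd /c "call env.bat && kmk"
```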

Sidenote: kmk is part of kBuild, which is not Kbuild, the Linux kernel build system. Have a look at it and then just fucking use CMake like everyone else. Anyone building on Windows will thank you for it.

A couple (9) more failed builds later, I’d hacked the batch files enough to start actually building VirtualBox! Is almost three days to set up a build system some kind of record? I’m sure even a new Microsoft employee could kick off a Windows build faster than this. Come on.

I didn’t expect it to work first time but I definitely didn’t expect this error:

Config.kmk:2773: C:/projects/virtualbox-mirror/out/win.amd64/release/DynamicConfig.kmk: No such file or directory
Config.kmk:3503: *** You need to enable code signing for a hardened windows build to work.. Stop.
Command exited with code 2

I think on Linux there’s a --disable-hardening option for configure, but this didn’t exist for the Windows build – there’s another good reason to use a single build system for all builds. Looking at this commit you’d think it’s as easy as flipping a switch, but apparently not: I could not disable the code signing or hardening options no matter what I tried. The real solution, of course, is to disable the error message itself.

Onwards and… upwards? The next error to sort out was a missing python.exe, which turned out to be pretty boring – someone had coded it to only ever look in /bin/, which won’t help you much on Windows. With that fixed, it should be smooth sailing, right? Yeah, nah.

35 builds in, I got completely stuck. For some reason openssl.h wasn’t being found, even though it hadn’t been a problem before. Luckily, AppVeyor lets you RDP into a build machine during the build itself to inspect the state of the machine. This is pretty amazing – and I’m still on a free account!

By adding iex ((new-object net.webclient).DownloadString('')) to your build configuration, you’ll get an IP and credentials printed in the build log. On a free account, I think the build logs are public, so be careful with this.

From memory I was trying to use Sysinternals Procmon (which every dev should learn to use) to work out where kBuild was expecting to find openssl.h. This turned out to be much too slow and created huge trace files – 600MB+. I’m very glad I’m no longer on DSL-based broadband. I tried a few different filters before giving up and looking for a way to run Procmon non-interactively. Luckily that’s already been thought of: if you run procmon /Quiet /Minimized /BackingFile virtualbox-build.pml you can start Procmon automagically. I did have to ask for help with this one as it seemed to ‘block’ the build. AppVeyor support is awesome and came through with a fix, even opening a pull request on my repo (tl;dr: start commands in PowerShell jobs and they won’t interrupt anything – similar to a bash subshell).
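The job-based fix boils down to something like this (a sketch only; the Procmon path is an assumption):

```powershell
# Start Procmon in a background job so it doesn't block the build step
Start-Job {
  & 'C:\tools\procmon.exe' /AcceptEula /Quiet /Minimized /BackingFile C:\projects\virtualbox-build.pml
}
```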

After fiddling with paths and learning about all the different hooks AppVeyor has (I needed to retrieve files from the build, but the artifacts section normally isn’t run on a ‘failed’ build), I managed to get 628MB of 7-zipped PML files off the build host. If you ever need to debug at this level, I highly recommend doing something like this. This was build number 44, clocking in at over 56 minutes, which is a bit of a problem as AppVeyor’s free plan has a 60 minute limit. Over the next little while I tried different paths for OpenSSL and talked to one of the VirtualBox developers on IRC.

This developer told me that internally (for the ‘commercial’ build of VirtualBox) they use a different build process which unfortunately couldn’t be shared due to licencing concerns. Sigh. At least a few of the things in the build system made a bit more sense (it looks like an internal checkout of the source contains most of the build tools).

It seems like the include path just got too long and the build system didn’t see all of it, or something along those lines – the fix was to move openssl.h to virtualbox\include. After an evening and a half (?) I was off and moving again.

You should definitely include any references you’ve used in troubleshooting in code comments and commit messages. I don’t think I would have got much further without one particular write-up, which described the exact issue I was having at this point and a solution – copying a file with a specific name. I’m not sure whether I caused more issues here by mixing 32 and 64 bit libraries, but the build continued.

By build number 60, with a commit message of ‘Sigh’, I’d definitely hit the 60 minute limit of the free AppVeyor plan. Surprisingly, AppVeyor staff increased my time limit to 90 minutes when I asked. Thanks, Ilya!

At this point I was tracking down the cause of [00:16:37] kmk_builtin_redirect: _spawnvpe(/bin/nm.exe) failed: No such file or directory, which I probably should have recognised from the Python failure earlier – but which nm.exe did it want? Visual Studio? MinGW? 32 or 64 bit? It took multiple gigs of ProcMon logs to work this out, and I’m still not sure I chose the right one.

It was at build 62 that I found myself completely defeated – VirtualBox builds some SSL certificates into a binary for some reason, and whatever was generating the byte arrays generated bad code.

[00:19:41] TrustAnchorsAndCerts.cpp
[00:19:42] C:\projects\virtualbox-mirror\out\win.amd64\release\obj\SUPR3\TrustAnchorsAndCerts.cpp(496) : error C2070: 'const unsigned char []': illegal sizeof operand
[00:19:42] C:\projects\virtualbox-mirror\out\win.amd64\release\obj\SUPR3\TrustAnchorsAndCerts.cpp(540) : error C2070: 'const unsigned char []': illegal sizeof operand
[00:19:42] kmk: *** [C:/projects/virtualbox-mirror/out/win.amd64/release/obj/SUPR3/gen/TrustAnchorsAndCerts.obj] Error 2 (0x2)

I don’t know enough about C++ to fix this, and I don’t know enough about VirtualBox to fix whatever’s generating this. Maybe another day, with another bottle of Monkey Shoulder.
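For what it’s worth, error C2070 usually means sizeof was applied to an array declared without a bound. I haven’t dug into VirtualBox’s actual generator, but a generator can sidestep the problem entirely by emitting an explicit length constant next to each array, so the consuming C++ never needs sizeof at all. A toy sketch (all the names here are invented):

```shell
# Emit a C byte array plus an explicit size constant, so consuming code
# never needs sizeof on an array of unknown bound (the cause of C2070).
# g_abRootCert / g_cbRootCert are made-up names for illustration.
bytes="0x30 0x82 0x01 0x0a"
count=$(echo "$bytes" | wc -w)
printf 'static const unsigned char g_abRootCert[] = { %s };\n' "$(echo "$bytes" | sed 's/ /, /g')"
printf 'static const unsigned int g_cbRootCert = %d;\n' "$count"
```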

I only wanted to play with a modified VirtualBox Web Service, why is it this hard?

Failing to build an nginx addon

For a project I’m working on I needed to be able to rewrite the content of a page as it’s sent back to me from a remote server (via proxy_pass in nginx).

I’ve been using Alpine Linux more as part of very small Docker containers, so I started looking into how nginx is built there. After stumbling around a bit, I found that Alpine uses a ports-like system called aports. I copied the nginx port into a new folder and began working on a Dockerfile.

abuild, the Alpine package build tool, seems to be designed to build the whole tree, and the wiki talks about an abuild build command that apparently does everything except build the package. Sigh. I eventually cobbled enough together from mailing list posts and guesses to figure out that the command I wanted was abuild -r -fF -c -P /packages.

This is where it all went a bit wrong. The module I needed was the substitutions filter (http_subs_filter), which I added into the APKBUILD script with a faked-up sha256sum. abuild doesn’t seem to verify the checksum at all, but won’t let the build continue if it’s not listed.
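Rather than faking the checksum, it’s easy enough to generate the real sha256sums entry for the APKBUILD (the tarball name is illustrative, and the stand-in file is only there so the example runs anywhere; if I remember right, newer abuild also has an `abuild checksum` helper for exactly this):

```shell
# Regenerate the APKBUILD checksum entry from the actual tarball
tarball=ngx_http_substitutions_filter_module.tar.gz
printf 'fake tarball contents' > "$tarball"   # stand-in file so the example is self-contained
printf 'sha256sums="%s  %s"\n' "$(sha256sum "$tarball" | cut -d' ' -f1)" "$tarball"
```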
No matter what I did, I got the following failure:

mv: can't rename '/nginx/pkg/nginx/usr/lib/nginx/modules/': No such file or directory
>>> ERROR: nginx-mod-http_subs_filter*: _module failed
>>> ERROR: nginx*: prepare_subpackages failed
>>> ERROR: nginx: all failed

Now, earlier in the build log I could see the module being built, so I couldn’t understand why this was going wrong.

After more stumbling around I found that this was an “old style” module – that couldn’t be built as a dynamic module like the rest of the ones in the script. I wonder why nginx’s build system doesn’t make more noise about this?

A quick patch later and I had it building as a dynamic module… except that it didn’t, really: there was still no .so file created, only an intermediate .o somewhere in the tree.

The really strange part is that if you try the same thing on a Debian or Ubuntu system, the module builds fine. At this point I’d wasted a lot of time trying to build on Alpine, so I switched my base image over to debian:stretch-slim, installed nginx-full (which already has the module I need) and continued with my project.

It’s not all bad though – I created a couple of good things while trying to get this to work. One is a debug version of abuild, which was as simple as changing #!/usr/bin/ash -e to #!/usr/bin/ash -xe. It’s up on Docker Hub as voltagex/alpine-builder-debug if you really need it. This is helpful if you’re trying to build packages for Alpine and things aren’t working as you expect – be warned though, it produces a lot of output.

Update: awilfox has a much better patch for abuild, which adds the traditional -v option to turn on verbose output. It does the same thing as my hack, but is cleaner.

Building nginx from source, very quickly

Just a quick note more for myself than anyone else. I needed to test a particular module for nginx but I didn’t have the build pre-reqs set up properly.

I’m slowly getting used to using Docker where I’d normally spin up additional VMs, so these are the commands:

docker run -it debian:latest /bin/bash
# apt update && apt install -y git build-essential
# cd /usr/src && git clone --depth=1
# git clone --depth=1
# This is the bit most of the tutorials miss! The configure script is not where you'd normally expect
# cd nginx
# ./auto/configure --without-http_rewrite_module --without-http_gzip_module --add-module=../ngx_http_substitutions_filter_module
# make -j

Note that the --without flags make this build pretty useless – don’t use it for anything important!

Another oddity is that the output binaries end up in ./objs.

Turris Omnia: OpenWRT with warts

Just a quick post to catalogue some of the errata the Turris team haven’t posted about yet.

The DNS (configuration) is a lie

The DNS settings in LuCI are almost completely non-functional – with no indication why – because the Omnia uses some combination of Unbound and Knot for DNSSEC. This means I’m currently getting fewer features out of the resolver on my router than I was running dnsmasq on FreeBSD.


Some solutions have been posted elsewhere, but many would involve me having to re-configure IPv6, which is a pain.


Accidentally removing WiFi support from a Moto X Style


Oops. One should not drink and adb. This is the result of flashing a newer system image onto a Moto X Style (XT1572) without wiping the partition first. The system booted and appeared to have been upgraded from 6.0.0 to 6.0.1, but WiFi no longer functioned.

Digging a little further this appears to be some kind of mismatch between the WiFi driver and the firmware on the system partition. Oops again.


Luckily I was still able to back up my files to the SD card (tell your manufacturer you want expandable storage!) and reflashing a “stock” firmware image also restored my ability to use WiFi.

Stock is in quotes because, annoyingly, Motorola don’t provide factory images for the Style, only the Pure – which in theory should work, but I really don’t want to risk something serious like messing up EFS or the radio firmware.

Therefore, I’ll have to hope that the firmware image I found is unmodified and genuine.

One more sidenote: Flashing a factory image and relocking the bootloader doesn’t remove the annoying “bootloader was unlocked” warning when turning on the phone.

Testing simple ZFS failure on FreeBSD

ZFS is one of the most robust and fault-tolerant filesystems around. It can also be fairly tricky to set up and maintain, so I’m writing at least one blog post based on my experience setting up my NAS. I definitely made some mistakes when I configured my storage, but it’s served me well for a while now. Unfortunately, other parts of the motherboard I selected have been a bit temperamental. By writing this I’m hoping to feel a bit more confident about my upcoming drive replacement – one of my constraints is having only 4 storage drives, so increasing the total storage available means replacing all of the drives, one by one.

An aside: VirtualBox is frustrating as hell

While writing this blog post, I added and removed a fair few drives, as well as scrapping the VM configuration completely and starting again. For the benefit of future searchers, if you hit an error like:

VirtualBox error message

UUID {c56e7d80-8abc-44db-a27c-669a8c98162c} of the medium 'F:\FreeBSD Test\rootfs.vdi' does not match the value {bad50d40-beeb-40a4-9e03-3a98bb85fe5e} stored in the media registry ('C:\Users\live\.VirtualBox\VirtualBox.xml').

when starting your virtual machine, have a read of the relevant post in the VirtualBox forum thread on this error. Most of the rest of the thread is irrelevant.

The “bad” UUID wasn’t actually stored in the path shown, but in the .vbox file – the definition of the VM. Close VirtualBox’s interface, update the UUID and away we go again. Of course, if you’ve got important data in your VM, you may want to clone and reattach instead of editing files or modifying the drive UUID itself.
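If hand-editing XML feels too fragile, VirtualBox can also stamp a UUID onto the disk directly. Something like this (using the UUIDs from the error message above) should bring the disk and the media registry back into agreement:

```shell
# Set the VDI's UUID to the one the media registry expects
VBoxManage internalcommands sethduuid "F:\FreeBSD Test\rootfs.vdi" bad50d40-beeb-40a4-9e03-3a98bb85fe5e
```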

Virtual Machine Setup

I’m using VirtualBox 5.1.8 on a Windows 10 host. Newer versions should be fine, and the host OS shouldn’t matter.
To install FreeBSD, I’m using FreeBSD-11.0-RELEASE-amd64-disc1.iso, although most other recent releases should work.

Create a VM that looks something like this, but don’t add any disks yet:
One of the initial VM creation screens

To speed things up, I’m creating the disks using the vbox-img command instead of via the GUI:

PS F:\FreeBSD Test> vbox-img.exe createbase --filename rootfs.vdi --size 10240000000
PS F:\FreeBSD Test> vbox-img.exe createbase --filename disk1.vdi --size 10240000000
PS F:\FreeBSD Test> vbox-img.exe createbase --filename disk2.vdi --size 10240000000
PS F:\FreeBSD Test> vbox-img.exe createbase --filename disk3.vdi --size 10240000000
PS F:\FreeBSD Test> vbox-img.exe createbase --filename disk4.vdi --size 10240000000
PS F:\FreeBSD Test> vbox-img.exe createbase --filename disk5.vdi --size 10240000000

You will actually need the full 50GB free by the end of this post.
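As a sanity check on those sizes: unlike VBoxManage, vbox-img’s --size appears to take bytes, so each file is about 10 GB (roughly 9.5 GiB), and the five data disks together account for the “full 50GB”:

```shell
# Five 10,240,000,000-byte data disks, expressed in (decimal) gigabytes
echo $(( 5 * 10240000000 / 1000000000 ))   # 51, i.e. roughly 50GB
```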

Disk setup in VirtualBox

Set up the disks so they look like this in VirtualBox, with hot-plugging enabled for every drive except the root FS.
The only other thing I needed to change was adding an IDE controller, as VirtualBox didn’t want to boot from a SATA-attached CD drive for unknown reasons.

Adding an IDE controller so VirtualBox will boot

For posterity, here are the other VM settings I used, although I’m not sure if they’re all needed.



FreeBSD Setup

I won’t go through installing FreeBSD here, except for the following:
* Install the system to the first drive shown, ada0. The defaults should be fine
* Install an SSH server
* You may want to change to Bridged Networking in VirtualBox – I couldn’t SSH in using the VirtualBox network
* Log in as root on first boot and add a user through the console as FreeBSD sets PermitRootLogin no for sshd by default
* I install sudo and nano to make my life easier

At this point you should be able to sudo su to root and see the following setup:

root@vm-freebsd:/usr/home/voltagex # camcontrol devlist
<VBOX CD-ROM 1.0>                  at scbus0 target 0 lun 0 (cd0,pass0)
<VBOX HARDDISK 1.0>                at scbus2 target 0 lun 0 (ada0,pass1)
<VBOX HARDDISK 1.0>                at scbus3 target 0 lun 0 (ada1,pass2)
<VBOX HARDDISK 1.0>                at scbus4 target 0 lun 0 (ada2,pass3)
<VBOX HARDDISK 1.0>                at scbus5 target 0 lun 0 (ada3,pass4)
<VBOX HARDDISK 1.0>                at scbus6 target 0 lun 0 (ada4,pass5)

I have based my ZFS configuration on – my NAS only has 4+1 disk bays, so I’ve matched that configuration here. Apparently it’s better to have 6 devices, but I can’t yet bend space and time, nor afford to import another SuperMicro chassis to Australia.

For this test, we’re going to create two mirrors and ignore the cache.

root@vm-freebsd:~ # zpool create tank mirror ada1 ada2 mirror ada3 ada4

root@vm-freebsd:~ # zpool status
  pool: tank
  state: ONLINE
  scan: none requested

    NAME        STATE     READ WRITE CKSUM
    tank        ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        ada1    ONLINE       0     0     0
        ada2    ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        ada3    ONLINE       0     0     0
        ada4    ONLINE       0     0     0

errors: No known data errors
root@vm-freebsd:~ # zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank    70K  18.4G    19K  /tank

Good enough for our testing purposes.

Now, create a dataset.

root@vm-freebsd:~ # zfs create tank/data
root@vm-freebsd:~ # cd /tank/data/

Let’s create a 4GB file of random data to represent the important data on our NAS – the hardest part of this was working out what syntax to use for arithmetic. This is csh; we’re not in ~~Kansas~~ bash any more.

root@vm-freebsd:/tank/data # dd if=/dev/random of=file1.dat bs=`expr 1024 * 1024` count=4000
4000+0 records in
4000+0 records out
4194304000 bytes transferred in 93.065192 secs (45068451 bytes/sec)
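The arithmetic checks out: a 1 MiB block size times 4000 blocks is exactly the byte count dd reports. (In a Bourne-style shell you’d also need to escape the * to keep it away from globbing.)

```shell
bs=$(expr 1024 \* 1024)   # 1048576; the backslash keeps * away from the shell's globbing
echo $(( bs * 4000 ))     # 4194304000, matching dd's "bytes transferred"
```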

After all of that, it's time to get to the point of this article – what happens when things go wrong?

In the virtual machine settings, go to Storage and remove the fourth disk. Annoyingly, you can remove a disk while the VM is running (because we turned on hot-plugging) but not add one.

Most server/NAS boards will support hot-plugging, even if the idea of ripping out a spinning drive worries me a little.

Now that we've removed a drive, what does our pool look like?

root@vm-freebsd:/tank/data # zpool status
  pool: tank
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: none requested

        NAME                      STATE     READ WRITE CKSUM
        tank                      DEGRADED     0     0     0
      mirror-0                ONLINE       0     0     0
        ada1                  ONLINE       0     0     0
        ada2                  ONLINE       0     0     0
      mirror-1                DEGRADED     0     0     0
        ada3                  ONLINE       0     0     0
        12153649206614195490  REMOVED      0     0     0  was /dev/ada4

errors: No known data errors

Oh no, drive failure! Lucky we have smartmontools running and smtpd set up to email a real email address*, so I was able to catch the problem (that I just caused).

* I wonder how many users actually have this set up. I still log in and check mutt when I remember.

If you “plug” the disk back in via the same Storage settings, you’ll need to run zpool online tank ada4, after which you’ll see:

root@vm-freebsd:/tank/data # zpool status
  pool: tank
 state: ONLINE
  scan: resilvered 19.5K in 0h0m with 0 errors on Sun Nov  6 22:54:01 2016

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0

In a real server, it’d take a lot longer to “resilver” and replace the disk in the array. In this configuration, if you lose a second disk while the resilver is happening, you lose everything.

Let’s try a slightly more realistic situation. If a drive had failed, you’d plug in a new one (disk5.vdi in my example) and replace the old with the new. I thought FreeBSD would assign a new device name, but the new disk took over the old ada4, leading to the slightly confusing status output below.
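For completeness, the replace step itself would have been something along these lines (a hypothetical reconstruction – the numeric GUID is the old member’s ID as shown in the status output, and the old and new disk sharing the name ada4 is what makes it confusing):

```shell
# Replace the old (removed) member, identified by GUID, with the new disk
zpool replace tank 12153649206614195490 ada4
```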

root@vm-freebsd:/tank/data # zpool status
  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Nov  6 23:00:53 2016
        2.23G scanned out of 3.91G at 127M/s, 0h0m to go
        1.11G resilvered, 57.16% done

        NAME                        STATE     READ WRITE CKSUM
        tank                        DEGRADED     0     0     0
          mirror-0                  ONLINE       0     0     0
            ada1                    ONLINE       0     0     0
            ada2                    ONLINE       0     0     0
          mirror-1                  DEGRADED     0     0     0
            ada3                    ONLINE       0     0     0
            replacing-1             OFFLINE      0     0     0
              12153649206614195490  OFFLINE      0     0     0  was /dev/ada4/old
              ada4                  ONLINE       0     0     0  (resilvering)

To make the resilver take a bit longer for my next test, I created a couple more 4GB files. Then I’m going to cause a kernel panic.

root@vm-freebsd:/tank/data # sysctl debug.kdb.panic=1

Yes, if you run this as root it will actually take your system down. Do(n’t) try this at home.

After rebooting and logging in again via SSH, has my pool survived? Aside from one slightly scary message when I first ran zpool status (1 byte a second?!), it seems to be running fine.

root@vm-freebsd:~ # zpool status
  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Nov  6 23:13:54 2016
        1.60G scanned out of 11.7G at 1/s, (scan is slow, no estimated time)
        818M resilvered, 13.64% done

        NAME                       STATE     READ WRITE CKSUM
        tank                       DEGRADED     0     0     0
          mirror-0                 ONLINE       0     0     0
            ada1                   ONLINE       0     0     0
            ada2                   ONLINE       0     0     0
          mirror-1                 DEGRADED     0     0     0
            ada3                   ONLINE       0     0     0
            replacing-1            DEGRADED     0     0     0
              6482039156816553123  OFFLINE      0     0     0  was /dev/ada4/old
              ada4                 ONLINE       0     0     0

errors: No known data errors
root@vm-freebsd:/usr/home/voltagex # zpool status
  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Nov  6 23:13:54 2016
        2.70G scanned out of 11.7G at 93.5M/s, 0h1m to go
        1.34G resilvered, 22.99% done

        NAME                       STATE     READ WRITE CKSUM
        tank                       DEGRADED     0     0     0
          mirror-0                 ONLINE       0     0     0
            ada1                   ONLINE       0     0     0
            ada2                   ONLINE       0     0     0
          mirror-1                 DEGRADED     0     0     0
            ada3                   ONLINE       0     0     0
            replacing-1            DEGRADED     0     0     0
              6482039156816553123  OFFLINE      0     0     0  was /dev/ada4/old
              ada4                 ONLINE       0     0     0  (resilvering)

Okay, what about my data?

root@vm-freebsd:~ # ls /tank/data
ls: /tank/data: No such file or directory


root@vm-freebsd:~ # zfs mount -a
root@vm-freebsd:~ # ls /tank/data/
file1.dat file2.dat file3.dat

Phew, looks good.

I’d like to find out if there are any other failure modes I should know about (could I “remove” the cache from my NAS by re-purposing it as the boot drive?), but at least I know I’m not going to lose my data if my system decides to reboot halfway through resilvering, and that removing and replacing a drive isn’t really that scary.

Benchmarking compression on FreeBSD

After I commented on Hacker News that I was having performance issues with compressed ZFS, I started looking into benchmarking the Atom C2758 CPU that’s in my home NAS.

I found the fsbench project via a reply to my comment and hit my first challenge. The code I needed to run on my FreeBSD system was hosted in a VCS called Fossil.

[voltagex@beastie ~/src]$ sudo pkg install fossil
Updating FreeBSD repository catalogue...
Fetching meta.txz: 100% 944 B 0.9kB/s 00:01
Fetching packagesite.txz: 100% 6 MiB 824.2kB/s 00:07
Processing entries: 100%
FreeBSD repository update completed. 25371 packages processed.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
fossil: 1.35_1,2

Number of packages to be installed: 1

The process will require 3 MiB more space.
1 MiB to be downloaded.

Proceed with this action? [y/N]: y
Fetching fossil-1.35_1,2.txz: 100% 1 MiB 371.8kB/s 00:03
Checking integrity... done (0 conflicting)
[1/1] Installing fossil-1.35_1,2...
[1/1] Extracting fossil-1.35_1,2: 100%
Message from fossil-1.35_1,2:
After each upgrade do not forget to run the following command:

fossil all rebuild

[voltagex@beastie ~/src]$ fossil clone
[voltagex@beastie ~/src]$ fossil clone fsbench
#nothing happens for a while
Round-trips: 2 Artifacts sent: 0 received: 3383
Clone done, sent: 564 received: 6760744 ip:
Rebuilding repository meta-data...
100.0% complete...
Extra delta compression...
Vacuuming the database...
project-id: c2da1bb713a9809141b9c12d0019c6d73c005f0b
server-id: bd54879d0e061500b44f7ece394c4accab5fc7dd
admin-user: voltagex (password is "3b518f")

Well okay, apparently I'm an admin now. All I really wanted to do was get the code.

[voltagex@beastie ~/src]$ cd fsbench
-bash: cd: fsbench: Not a directory

Wat? The Fossil documentation (which has no IDs or anchors, so I can’t link directly to the section) says that I need to run fossil open on the file that was created.

[voltagex@beastie ~/src]$ file fsbench
fsbench: Fossil repository - SQLite3 database

Oh hey, that's kinda neat.

[voltagex@beastie ~/src]$ fossil open fsbench
#snipped file listing
project-name: fsbench
repository: /usr/home/voltagex/src/fsbench
local-root: /usr/home/voltagex/src/
config-db: /home/voltagex/.fossil
project-code: c2da1bb713a9809141b9c12d0019c6d73c005f0b
checkout: bf701e2f58e59f50dc8bfacb4e7ba916b43931fc 2015-07-11 07:55:41 UTC
parent: aa43cf887068b73178d3fcaa432f17a9d3585e63 2015-04-04 18:07:33 UTC
tags: trunk
comment: Clarifications in Results.ods (user: m)
check-ins: 107

Bugger. Files all over my src/ path. fossil close doesn’t clean up the mess it made, either.

[voltagex@beastie ~/src]$ mkdir fsbench && cd fsbench && fossil open ../fsbench
mkdir: fsbench: File exists
mkdir fsbench-checkout && cd fsbench-checkout && fossil open ../fsbench
[voltagex@beastie ~/src/fsbench-checkout/]$ cd src
[voltagex@beastie ~/src/fsbench-checkout/src]$ cmake -DCMAKE_BUILD_TYPE=Release .
-bash: cmake: command not found

Damnit. I’ll be right back. (FreeBSD really really needs an Australian package mirror)

A short time and a few package installs later, I had a working fsbench executable.

Now, let’s get some test data:

[voltagex@beastie ~/src/fsbench-checkout/src]$ curl | xz -d - > linux-4.7.2.tar
[voltagex@beastie ~/src/fsbench-checkout/src]$ ls -lah linux-4.7.2.tar
-rw-r--r-- 1 voltagex voltagex 646M Aug 26 21:33 linux-4.7.2.tar

[voltagex@beastie ~/src/fsbench-checkout/src]$ ./fsbench LZ4 linux-4.7.2.tar
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
LZ4 r127
231195303 (x 2.928) 173 MB/s 750 MB/s 114e6 493e6
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
done... (4*X*1) iteration(s)).

173 MB/s – that doesn't quite match the 10 megabytes a second I was getting transferring data to that dataset a couple of weeks ago.
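The compression ratio fsbench reports is at least consistent with the file sizes – 646 MiB in, 231195303 bytes out – once you account for ls rounding the tar’s size to 646M:

```shell
# 646 MiB original vs. fsbench's compressed size: comes out near the reported x 2.928
awk 'BEGIN { printf "%.3f\n", (646 * 1024 * 1024) / 231195303 }'   # prints 2.930
```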

Here's where things get a bit more interesting (for me, at least). The data I was trying to move to the NAS a little while ago was a set of tiles from a LiDAR survey done in nearby Canberra. During GovHack 2016 I’d investigated the LAZ format, which took a 135GB dataset down to a more manageable 18.7GB. When I copied the files to my NAS, it was the compressed form that I used. From this it seems like FreeBSD’s LZ4 code will try very hard to recompress already-compressed data, whereas fsbench doesn’t bother (LAZ is compressed, LAS isn’t):

[voltagex@beastie /BoxODisks/paperweight/Data/Compressible/ACT_8ppm_Final_Delivery]$ ~/src/fsbench-checkout/src/fsbench LZ4 ACT2015-C3-ORT_6966100_55_0002_0002.las
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
LZ4 r127
873970150 (x 1.526) 89.3 MB/s 710 MB/s 30e6 244e6

[voltagex@beastie /BoxODisks/paperweight/Data/Compressible/ACT_8ppm_Final_Delivery]$ ~/src/fsbench-checkout/src/fsbench LZ4 Compressed/ACT2015-C3-ORT_6966100_55_0002_0002.laz
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
LZ4 r127
145529952 (x 1.000) 10.7 GB/s - 0e0 0e0
Codec version args
C.Size (C.Ratio) E.Speed D.Speed E.Eff. D.Eff.
done... (4*X*1) iteration(s)).
[voltagex@beastie /BoxODisks/paperweight/Data/Compressible/ACT_8ppm_Final_Delivery]$

So yes, there’s definitely a difference in compression speeds there but it looks like the little Atom C2758 is capable enough to cope with a few compressed datasets.

I don’t think it’s a Samba issue either – Windows reports well over 100 megabytes per second copying over the network to an SSD. I’m having difficulty reproducing the original problem, possibly because of caching on the ZFS pool.