FreeNAS 9.3 on the SuperMicro 5028A-TN4 / A1SRi-2758F

Warning: work in progress article

If you have this board you probably have IPMI. Use the “Virtual Storage” option to mount the FreeNAS ISO:


You may have to screw around in the BIOS (actually UEFI, but they do a great job of emulating the terrible UX from 20 years ago) to get the “ATEN Virtual CDROM” to show up as a boot option. You may also be able to boot the installer from one of the rear USB ports but this was an exercise in frustration.

Once you get into the FreeNAS installer, drop to a shell and run

kldload xhci.ko

You should get output similar to what’s shown below.


Type exit to get back to the installer and continue onwards. When you hit install, you should get to choose your USB3 device – it’s probably right down the bottom so scroll down.


Here’s where the advice I’d found is unfortunately wrong. I’m not sure whether adding

set kFreeBSD.xhci_load=YES

to GRUB’s options ever worked, but it certainly didn’t help me. So I tried something else.


At the FreeNAS/GRUB boot menu, hit ‘e’ to edit the FreeNAS entry, then add

kfreebsd_module_elf /ROOT/default/@/boot/kernel/xhci.ko

I’m not sure whether position is important, but I added it after the line that loads ispfw.ko.
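Putting it together, the edited menu entry ends up looking roughly like this – the surrounding lines are from memory and will differ on your install; only the xhci.ko line is the addition:

```text
menuentry "FreeNAS (default)" {
  ...
  kfreebsd_module_elf /ROOT/default/@/boot/kernel/ispfw.ko
  kfreebsd_module_elf /ROOT/default/@/boot/kernel/xhci.ko
  ...
}
```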

Unfortunately until we can get to grub.cfg or loader.conf we’ll have to re-enter that line every time we want to boot from USB3.

2015, the year of the Never Ending Open Source To Do List

This year was the year I took on way too much. It’s so easy to hit Fork on a GitHub repository or raise an issue, trying to help out on someone else’s project, but eventually it all catches up with you.

I feel most guilty about bailing on WheelMap, especially as accessibility is something that affects me personally. I was supposed to help implement a new search API, but I got distracted building a Dockerfile to make my development setup easier. This eventually sapped my enthusiasm for the project, then (paid) work got crazy and I had to step away.

I feel like I have open-source-ADD. There are so many cool projects to help out on, so many bugs that could be fixed if I could just spend 30 minutes on them… then another 30 minutes, and another until it’s Sunday at 3am and I really should sleep, lest my nighttime exploits affect my ability to pay for my Internet connection.

I didn’t really have a term for the way I was approaching open source / volunteer work until I replied to someone on Hacker News. The Never Ending Open Source To Do List. It’s like making the minimum repayments on a credit card while continuing to buy every game in a Steam Sale. It’s not going to end well – eventually you have to start paying interest.

In an article titled A Lot Happens, Jesse writes:

You can’t be emotionally all in on everything. You can’t make another 24 hours appear to be “present” for everything. Instead, I stole time and ran my emotional credit card like it was limitless.

I guess reading about what Jesse went through stuck with me. I am NOT comparing our two experiences, but I can definitely relate to some of the patterns of behaviour (and I feel a bit guilty because I know I’ve asked favours of Jesse in the past).

You should go and read that one, and the followup. This article will still be here when you get back. It should serve as a warning for everyone who has stayed up late at night, somewhere near the Ballmer Peak, trying to get one more patch in or one more mailing-list discussion sorted.

In my perfect world, someone would pay for me to be Open Source Batman, swooping in on every call for contributors to help get projects going again. I mean, I was able to help PureDarwin by sending a few well-placed emails, wasn’t I?

In reality, I’m trading in my free time for a bit of an ego boost when issues get closed or patches get merged. My free time is a limited resource and sometimes it’d probably be better used getting fit or learning TypeScript.

I’m lucky I’ve had a bit of spare cash this year so I’ve been able to just spin up another instance to run another build for whatever project I’m working on this second, without putting too much thought into it. Unfortunately, the Australian dollar took a dive this year and most of the services I rely on are billed in US dollars. Renewals for things like my password manager, my VPS and my development tools came up and I found myself facing a bit of a shortfall.

Next year I’ll look into setting up a Patreon or something to cover the cost of the AWS instances I’ll be using for a lot of development as I’ve just turned off my DigitalOcean instance due to the falling Australian dollar.

But first I’ve got to do something worthy of being paid for – thus the cycle of the Never Ending Open Source To Do List starts again.

I hope that whatever job I find next year (hire me!) allows me to contribute to open source. I really do derive a lot of happiness from contributing to open source, but it’s a delicate balance between extracting a bit of a buzz from hitting the Fork button and getting enough sleep.

I don’t claim to have all the answers, and you should go read Ashe Dryden’s post on OSS and unpaid labor if you are also struggling with your open source contributions.

Here’s to knocking off a few more items on the List in 2016.

Malvertising, on my StackOverflow? It’s more likely than you think

Context / Link ads on StackOverflow

I’m pretty sure the capitalised, pop up generating links in the screenshot above aren’t covered in the StackExchange-backed AcceptableAds initiative.

Now, apart from snarky opening lines, I’m not blaming StackOverflow for this. After panicking that my newly flashed Nexus 5 was infected with some kind of malware, I fired up tPacketCapture to check what exactly was going on. Unfortunately, this uses the Android VPN API to capture traffic without root, and Android thinks it’s a good idea to send DNS over the unencrypted connection by default – so I couldn’t check for DNS poisoning or anything like that.

I’ve dropped the PCAP into Fiddler4 for ease of viewing. Can you see anything wrong here?

Dodgy traffic listing

As far as I can tell, a sketchy advertiser is on either ScorecardResearch or QuantServ. The wonderfully named hatredsmotorcyclist is serving some kind of obfuscated JavaScript related to DNSUnlocker, which is a known malware provider, though not normally on Android. In a desktop browser, that JavaScript generates a whole lot of fake virus-scanner popups which are sure to completely screw up your PC. I should probably run them in a VM at some point.

I can’t reproduce the link ads from the first screenshot, but I’ve posted a beautified version of the DNSUnlocker JavaScript as a Gist. I don’t recommend running it. I did – in the console of Chrome on GitHub, which means I was partially protected by the Content Security Policies that GitHub sets.

You can see what the first script tried to load in the second file of that gist, but I’m sorry I couldn’t format it very well. You can see something called “re-markit” which I’m going to guess did the “marking” in the first screenshot.

Whether I was DNS-poisoned or not, I won’t know – but this is all the more reason to run an ad-blocker, and to be very careful about what you let through: an ad network that’s benign today could be serving the latest 0-day tomorrow. I’m lucky that all I got was some crappy ads and a Play Store redirect.

A cleaned-up version of the Fiddler session archive is hosted here – malvertising.saz (sorry, I had to rename it for WordPress to allow the upload). I’d appreciate any help doing a more detailed analysis and reporting those strange domains to the registrars and the hosters. Oh, and if you see something like this on your Android, take a packet capture, then clear your browser cache. If you’re rooted, you may want to install AdBlock from the F-Droid store.

Like what I do? Support me here

Need a drink? Try a Whiskey Sweet n’ Sour from The Cinnamon Scrolls

Reimplementing apt-file, badly

I recently had an idea: write an apt-file (or yum whatprovides) equivalent that works across multiple distros. If you’ve never used a utility like this, they’ll tell you which package provides a certain file. This would help when writing README files, to work out which dependencies are needed on each distro.

After a couple of hours of fiddling around (read: procrastinating), I had a working import from the repository Contents.gz file that’s on every Debian mirror. The thing that struck me was the file sizes:

File                                                       Size
Contents-amd64.gz                                          26,721 KB
Contents-amd64                                             378,971 KB
Contents-amd64.sqlite (naive import, lots of duplication)  452,146 KB
Contents-amd64.sqlite (de-normalised)                      330,274 KB
Contents-amd64.sqlite.gz*                                  55,858 KB

Then I got side tracked…

Contents-amd64.gz is what’s downloaded when you run apt-file update. First off, I basically looped over each line in the file, split it into package name and file name, and inserted the pair straight into SQLite.

I then remembered some introductory database theory and ‘de-normalised’ the data – each package becomes an integer and then each row only stores a reference to that integer.

The downside to this is that I needed to run two imports: once to get the list of packages and files, and again to link the packages to the list of files. In SQLite, this means creating copies of tables, dropping the duplicate ones and then running VACUUM to reclaim the space. The result is that I (barely) beat the original file for storage efficiency.
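In hindsight, the second import can be avoided by assigning the package ids in Python as you go. Here’s a rough sketch of the idea – the schema and names here are mine for illustration, not what my actual code does, and it assumes the Contents header has already been skipped:

```python
import sqlite3

def import_contents(lines, db_path="contents.sqlite"):
    """Import Debian Contents lines ('path  section/package[,...]')
    into an integer-keyed SQLite schema in a single pass."""
    db = sqlite3.connect(db_path)
    db.executescript("""
        CREATE TABLE IF NOT EXISTS packages (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
        CREATE TABLE IF NOT EXISTS files (path TEXT, package_id INTEGER REFERENCES packages(id));
    """)
    ids = {}  # package name -> integer id, built as we go
    for line in lines:
        # Each data line is "<path><whitespace><section/package>[,more]"
        path, _, pkgs = line.rpartition(" ")
        for pkg in pkgs.split(","):
            if pkg not in ids:
                cur = db.execute("INSERT INTO packages (name) VALUES (?)", (pkg,))
                ids[pkg] = cur.lastrowid
            db.execute("INSERT INTO files VALUES (?, ?)", (path.strip(), ids[pkg]))
    db.commit()
    return db

def search(db, needle):
    """Rough equivalent of 'apt-file search'."""
    return db.execute(
        "SELECT p.name, f.path FROM files f JOIN packages p ON p.id = f.package_id "
        "WHERE f.path LIKE ?", ("%" + needle + "%",)).fetchall()
```

The search is then a join against the small packages table instead of scanning duplicated package-name strings.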

Next, what about search? Could I really beat grep?

I think caching and load affected some of the results – a couple of runs a few hours apart produced wildly different numbers. All of my tests were run on a 512 MB DigitalOcean instance.

$ time zgrep bash /var/cache/apt/apt-file/mirrors.digitalocean.com_debian_dists_jessie_main_Contents-amd64.gz | head
bin/bash                                                shells/bash
bin/bash-static                                         shells/bash-static
bin/rbash                                               shells/bash
etc/apparmor.d/abstractions/bash                        admin/apparmor
etc/bash.bashrc                                         shells/bash
etc/bash_completion                                     shells/bash-completion
etc/bash_completion.d/R                                 gnu-r/r-base-core
etc/bash_completion.d/_publican                         perl/publican
etc/bash_completion.d/aapt                              devel/aapt
etc/bash_completion.d/adb                               devel/android-tools-adb

real    0m0.013s
user    0m0.000s
sys     0m0.000s

$ time apt-file search bash | head
0install-core: /usr/share/bash-completion/completions/0install
0install-core: /usr/share/bash-completion/completions/0launch
aapt: /etc/bash_completion.d/aapt
acheck: /usr/share/doc/acheck/bash_completion
acl2-books: /usr/lib/acl2-6.5/books/misc/bash-bsd.o
acl2-books: /usr/lib/acl2-6.5/books/misc/bash.o
acl2-books: /usr/share/acl2-6.5/books/misc/bash-bsd.o
acl2-books: /usr/share/acl2-6.5/books/misc/bash.o
acl2-books-certs: /usr/share/acl2-6.5/books/misc/bash-bsd.cert
acl2-books-certs: /usr/share/acl2-6.5/books/misc/bash.cert

real    0m3.631s
user    0m3.268s
sys     0m0.280s
$ time python bash | head
shells/bash: bin/rbash
admin/apparmor: etc/apparmor.d/abstractions/bash
shells/bash: etc/bash.bashrc
shells/bash-completion: etc/bash_completion
gnu-r/r-base-core: etc/bash_completion.d/R
perl/publican: etc/bash_completion.d/_publican
devel/aapt: etc/bash_completion.d/aapt
devel/android-tools-adb: etc/bash_completion.d/adb
science/libadios-bin: etc/bash_completion.d/adios
utils/silversearcher-ag: etc/bash_completion.d/ag

real    0m2.375s
user    0m1.732s
sys     0m0.540s

Not too bad – I’m competitive with apt-file. Of course I’m missing some features, but it’s a good result for a little experiment.

The code is on my GitHub, if you’re interested.

I’ll keep working on this and add support for yum/dnf’s formats and probably migrate to a real database backend.


  • The code initially worked, but then would crash when piped to head. The answer? You have to handle SIGPIPE on UNIX-like systems. See this StackOverflow post for more info.

  • gzipping the database on an i5-based laptop took nearly 20 minutes. A more sane compression level is a lot quicker, but the compressed size is about 64 megabytes

  • SQLite can read gzipped files with a proprietary extension. It’s a pity the sqlite3 Python module doesn’t accept file handles to database files, otherwise I could just wrap it in a gzipstream and I’d be good to go

  • Someone with better SQL skills could probably make the database import a lot faster. On my DigitalOcean instance, the initial import takes a few minutes.
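The SIGPIPE fix from the first bullet boils down to one line; a minimal sketch of the shape it takes (POSIX-only, so guard it if you ever care about Windows):

```python
import signal

def ignore_broken_pipes():
    # Python installs its own SIGPIPE handling and raises BrokenPipeError;
    # restoring the OS default makes the process exit quietly when the
    # reader (e.g. `head`) closes the pipe early.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)
```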


Silly Project of The Day: Find out when Bake Off is on next, via phone

I’ve recently been playing around with Plivo, which is a competitor to Twilio and lets you connect voice calls and text messages to HTTP endpoints.

I decided to put together a silly demo app that used the text-to-speech API.

First off, let’s parse the page listing the next time The Great British Bake Off is on.

The BBC’s HTML is excellently marked up, so we’ll import BeautifulSoup and requests, then get to work.

import requests
from bs4 import BeautifulSoup
import os
import arrow

def get_next_bakeoff():
    url = ""

    # Cache the page locally on the first run; write the raw bytes,
    # not str(bytes), or the cache ends up full of b'...' noise
    if not os.path.isfile("bakeoff.cache"):
        open('bakeoff.cache', 'wb').write(requests.get(url).content)

    bakeoff = open('bakeoff.cache', 'r')
    content = BeautifulSoup(bakeoff, 'html.parser')
    elements = content.find('ol', 'highlight-box-wrapper')
    spans = elements.find_all('span')

I’ll save you some of the work here. The main program listing on that page is in an ordered-list (ol element). The interesting meta-data is in spans, so the lazy way is to just grab all of them and then filter from there.

Each program listing has a span with a position attribute so let’s use list comprehensions and magic.

positions = [span for span in spans if span.get('property') == 'position']

Now we have BeautifulSoup references for each program that has a ‘position’ in the list.

Next, let’s sort the list and grab the ‘startDate’ attribute for the latest episode:

        # Cast to int so position '10' doesn’t sort before '2'
        positions.sort(key=lambda x: int(x.text))
        start_time = [p.find('h3') for p in positions[-1].parents if
                      p.find('h3') is not None and p.find('h3').get('property') == 'startDate'][0]['content']

Now we have a start_time of something like 2015-08-26T20:00:00+01:00 which is fantastic, because it’s a standard datetime, and it’s got a timezone offset. BBC are really making this easy for us.

Next, let’s use the excellent arrow Python module to convert the datetime into our local timezone (at the moment for me that’s Canada’s Pacific time)

        next_bakeoff = arrow.get(start_time).to('local') 

2015-08-26T20:00:00+01:00 becomes 2015-08-26T12:00:00-07:00, and we know I might be able to watch Bake Off in my lunch break on Wednesday, if I’m lucky.
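arrow is doing little more than timezone arithmetic here; the same conversion works with nothing but the standard library (target offset hard-coded to UTC-7 for the sake of the example):

```python
from datetime import datetime, timedelta, timezone

start_time = "2015-08-26T20:00:00+01:00"

# fromisoformat understands the offset, so no manual parsing needed
parsed = datetime.fromisoformat(start_time)

# Shift into Pacific daylight time (UTC-7)
local = parsed.astimezone(timezone(timedelta(hours=-7)))
print(local.isoformat())  # 2015-08-26T12:00:00-07:00
```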

        next_bakeoff = arrow.get(start_time).to('local')
        current_time = arrow.now('local')
        if next_bakeoff > current_time:
            return next_bakeoff

The text-to-speech part is really the easiest. Have a look at Plivo’s documentation. We could use Plivo’s XML library, but at the time I didn’t know about it and I was fighting compilation issues in lxml, so again, I did this the easy way – "".format(). There’s not a lot of magic here – Plivo’s servers do the hard work. I’m setting en-GB so I get a voice with a distinctly British accent. The TTS voices are really nice; I think Plivo have shelled out the big bucks for Cepstral voices.

            xml = '''<Response>
                  <Speak language="en-GB" loop="1" voice="WOMAN">
                  The next episode of Bake Off is {0}
                  </Speak>
                  </Response>'''.format(next_bakeoff.humanize())

The call to .humanize() again comes from arrow and turns the timestamp into something nice like “in 5 days”.
What I haven’t shown here is the Flask app that I’ve put my code in, so instead of return xml I do return Response(xml, mimetype='text/xml').

Plivo have posted a good example of a TTS application at

Left as an exercise to the reader is hooking up the application to a Plivo phone number. I just followed the tutorial in the Getting Started / Text to Speech on a Call section.

At 0.8 cents a call, it wouldn’t be a cheap toy to play with in any significant volume, but it’s not going to break the bank, either.

The Great British Bake Off airs in the UK on BBC1 at 20:00 GMT.

If you’d like to support me and this blog, send bitcoins to 17RugTAi9LdxMUcgVhpWVRRvVsWg11P6V5 or check out my Support Me page.

If you’d like something completely different, try making some Chicken Avocado Alfredo from The Cinnamon Scrolls

Booting the Nexus 4 from “scratch”

I recently decided I’d like to see if it was possible to boot the Nexus 4 from scratch – the Android build system seems to use a prebuilt bootloader.img.

A few other people have wondered about this, but have since lost interest.

If this were a PC I’d be looking at the GRUB source code or similar, but the Nexus 4 was initially a mystery. There are no proper references to a bootloader in the AOSP source, but there used to be:

A listing of deleted files in Git

The Nexus 4 is significantly newer than the G1, so what the hell is booting Android these days?

For Qualcomm devices and a few others, the answer is the Little Kernel bootloader, or LK for short. Qualcomm actually publish quite a nice guide for their DragonBoard, which happens to run the same SoC as the Nexus 4.

After futzing around for a while trying to find source code, I found it in some unknown branch of – seriously, I had to run

git log --all -- **/msm8960*

and checkout a commit that looked reasonable:

git branch --contains [hash]

gave me nothing

The next step was to cross compile the lk source for the ARM processor in the Nexus.

git clone --depth=1

export TOOLCHAIN_PREFIX=arm-eabi-
export PATH=$PATH:/home/ubuntu/arm-eabi-4.8/bin/
cd lk
make msm8960

…which produced quite a bit less than I was expecting

ubuntu@ip-172-30-0-141:~/lk/build-msm8960$ ls -lah
total 8.7M
drwxrwxr-x 9 ubuntu ubuntu 4.0K Jul 31 17:09 .
drwxrwxr-x 15 ubuntu ubuntu 4.0K Jul 31 17:08 ..
drwxrwxr-x 3 ubuntu ubuntu 4.0K Jul 31 17:08 app
-rw-rw-r-- 1 ubuntu ubuntu 282K Jul 31 17:09 appsboot.mbn
-rwxrwxr-x 1 ubuntu ubuntu 282K Jul 31 17:09 appsboot.raw
drwxrwxr-x 3 ubuntu ubuntu 4.0K Jul 31 17:08 arch
-rw-rw-r-- 1 ubuntu ubuntu 1.4K Jul 31 17:08 config.h
drwxrwxr-x 7 ubuntu ubuntu 4.0K Jul 31 17:08 dev
drwxrwxr-x 2 ubuntu ubuntu 4.0K Jul 31 17:08 kernel
drwxrwxr-x 7 ubuntu ubuntu 4.0K Jul 31 17:08 lib
-rwxrwxr-x 1 ubuntu ubuntu 1.7M Jul 31 17:09 lk
-rwxrwxr-x 1 ubuntu ubuntu 282K Jul 31 17:09 lk.bin
-rw-rw-r-- 1 ubuntu ubuntu 4.1M Jul 31 17:09 lk.debug.lst
-rw-rw-r-- 1 ubuntu ubuntu 2.0M Jul 31 17:09 lk.lst
-rw-rw-r-- 1 ubuntu ubuntu 46K Jul 31 17:09 lk.size
-rw-rw-r-- 1 ubuntu ubuntu 87K Jul 31 17:09 lk.sym
-rwxrwxr-x 1 ubuntu ubuntu 18K Jul 31 17:09 mkheader
drwxrwxr-x 4 ubuntu ubuntu 4.0K Jul 31 17:08 platform
-rw-rw-r-- 1 ubuntu ubuntu 2.1K Jul 31 17:09 system-onesegment.ld
drwxrwxr-x 3 ubuntu ubuntu 4.0K Jul 31 17:08 target

Nothing that looks like a bootloader.img there. I know that that particular file should start with “BOOTLDR!”, so let’s check for it.

grep -ir BOOTLDR

Nothing. Shit. Looks like we’ll have to do this the hard way.
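For what it’s worth, checking the image itself for that magic is trivial – a quick sketch of the sanity check:

```python
def is_bootloader_image(path):
    """True if the file starts with the 8-byte 'BOOTLDR!' magic that a
    packed bootloader.img is expected to carry."""
    with open(path, "rb") as f:
        return f.read(8) == b"BOOTLDR!"
```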

If you download the stock images from Google, you’ll notice that they haven’t been changed very often:

897b2c89ab564caf5bb4fe6b90d3526b occam-jdq39/bootloader-mako-makoz10o.img
d56825a3b22b2fe9334b5efa4d117206 occam-jwr66y/bootloader-mako-makoz20i.img
ada04d55afd815674d586a60877587e8 occam-kot49h/bootloader-mako-makoz30d.img
ada04d55afd815674d586a60877587e8 occam-krt16s/bootloader-mako-makoz20i.img
ada04d55afd815674d586a60877587e8 occam-ktu84l/bootloader-mako-makoz30d.img
ada04d55afd815674d586a60877587e8 occam-ktu84p/bootloader-mako-makoz30d.img
a66f24fa47871e6e68c34952a6fb9587 occam-lmy47o/bootloader-mako-makoz30f.img
a66f24fa47871e6e68c34952a6fb9587 occam-lmy47v/bootloader-mako-makoz30f.img
a66f24fa47871e6e68c34952a6fb9587 occam-lrx21t/bootloader-mako-makoz30f.img
a66f24fa47871e6e68c34952a6fb9587 occam-lrx22c/bootloader-mako-makoz30f.img

I decided to concentrate on the makoz30f binary as it’s the same one my Nexus 4 is running. I knew there was an old tool called split_bootimg that could pull Android images apart, so I gave it a shot.

ubuntu@ip-172-30-0-141:~$ perl ~/images/occam-lmy47v/bootloader-mako-makoz30f.img
Android Magic not found in /home/ubuntu/images/occam-lmy47v/bootloader-mako-makoz30f.img. Giving up

Damnit. That tool works on kernel images, which are pretty similar across devices and manufacturers, not bootloaders, which are manufacturer and even device-specific.

I tried staring at split_bootimg’s Perl source and at the bootloader in a hex editor, but nothing immediately stood out, and I’m impatient. To Google!

…which felt a bit like cheating – I Googled “bootloader.img format” and got

ubuntu@ip-172-30-0-141:~$ ./bootunldr ~/images/occam-lmy47v/bootloader-mako-makoz30f.img
#0 sbl1 (94440 bytes)
#1 sbl2 (145448 bytes)
#2 sbl3 (1430160 bytes)
#3 tz (193388 bytes)
#4 rpm (144892 bytes)
#5 aboot (309856 bytes)

048cb7cffe6d1e68d3ed88f2d8fefa31 aboot
aad39fa6f8c661fd21afe01c47f3ddae rpm
1e752230dd972f7d30e5dc508e724842 sbl1
150f5f58c44ebaffb01923beed5260a1 sbl2
890e8e606af016e3ae239af1494821c0 sbl3
1d5c541be1018bb9b6f0183f4c60745c tz

Holy shit, progress!

At this point I noticed a conspicuous lack of source code. LK is the “aboot” part up there, but I didn’t have a way to build any of the others.

Emailing Qualcomm wasn’t a lot of help, because I haven’t won the lottery recently, and I’m allergic to NDAs.


Having grown tired of large corporations (I also emailed LG), I then rewrote the unpacker in Python, because I don’t trust any C code that I write. Having verified that I got the same binaries as above, I wrote a re-packer that could take the proprietary blobs from Qualcomm and add a Little Kernel/aboot file that I built myself. It can unpack the bootloader.img from the stock image and repack it, and the MD5s even match!
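For the record, the header layout my unpacker assumes looks roughly like this – an 8-byte magic, three little-endian uint32s, then a name/size table – though you should verify it against a real image rather than trust my recollection. A sketch of the unpacking half:

```python
import struct

MAGIC = b"BOOTLDR!"

def unpack_bootloader(data):
    """Split a packed bootloader image into its named parts.

    Assumed layout: 8-byte magic; three little-endian uint32s (image
    count, payload start offset, total payload size); one 68-byte entry
    per image (64-byte NUL-padded name + uint32 size); then the
    concatenated image payloads.
    """
    if data[:8] != MAGIC:
        raise ValueError("not a packed bootloader image")
    num_images, start, _total = struct.unpack_from("<3I", data, 8)
    entries, offset = [], 20
    for _ in range(num_images):
        name, size = struct.unpack_from("<64sI", data, offset)
        entries.append((name.rstrip(b"\x00").decode(), size))
        offset += 68
    images, pos = {}, start
    for name, size in entries:
        images[name] = data[pos:pos + size]
        pos += size
    return images
```

Repacking is the same structure written in reverse, which is how the MD5s can come out identical.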

The source is here:

This is where I stopped. I would not recommend that anyone create a custom bootloader for the Nexus 4 and attempt to flash it. The reason, and the reason I call this little project a failure, is that there’s no easy way to recover the bootloader if it fails to work. The bootloader controls access to fastboot, and if that doesn’t work you’re pretty much screwed. See my update for the other problem (oops!)

If you do brick your Nexus 4, your options are:

While I was researching this article I found the following reference boards in varying states of availability:

  • Qualcomm APQ8064 reference board
  • Compulab QS600
  • Inforce IF6410

You could probably do a lot of the same testing on these boards and recovery wouldn’t be quite as difficult.

Unfortunately, Inforce requires a purchase before you can download the BSP, and CompuLab lists the bootloader source as “coming soon” (and also doesn’t really want to talk to you unless you’re going to buy something).

Addendum: there’s a minor problem with my plans –
Basically this means I need another angle of attack. Luckily another Redditor pointed me towards – there are no instructions, so it looks like I’ll have to write a part 2.

If you want to fund more projects like this, send Bitcoins to 17RugTAi9LdxMUcgVhpWVRRvVsWg11P6V5

If cooking food, rather than ROMs is more your thing, try a recipe from The Cinnamon Scrolls

“Fixing” a buggy Windows driver with a Virtual Machine

Recently, a friend of mine built me a debug cable for the Nexus 4. It turns out the headphone jack can be turned into a UART by applying the right resistance, but that’s a story for another day.

My troubles started as soon as I plugged it in.

So, why is the USB adapter having an existential crisis? The answer seems to be that Prolific really wants you to buy a newer device.

Rebooting into a Linux distro would be an option, but what if you want to work in Windows for whatever reason? There is a “fixed” driver available, but it caused quite a lot of instability for me, with BSODs at random intervals.

I knew there was nothing wrong with the device itself, so I turned to virtualization for a solution. VirtualBox has the ability to intercept a USB device on a Windows host and provide it to the guest – but that comes at the cost of the RAM needed to run a whole distro like Debian.

What if I could run just enough to load a USB driver and something like netcat? (which can be used to get data to and from a serial port)

Enter Buildroot. It’s a set of scripts which can build all kinds of embedded system images – it’s most commonly used to build modem filesystems for MIPS and ARM based devices but it can also be used to create images that will work on x86.

I was able to hack a quick version of this together but after talking with people in #buildroot on Freenode, I spent some time learning how to use the build system properly. There’s a way to store all of your configuration changes separately from the buildroot code itself, described in Building Out of Tree in the manual. With a lot of tinkering, I now have a git repo that can be checked out and rebuilt very easily.

I worked on this for about a week and a half and now have an image that can be booted in VirtualBox with the PL2303 drivers loaded automagically. This means I can let VirtualBox take over the serial device and it won’t crash Windows.

I also wrote a script to take an empty image created with dd, partition and format it, then make it bootable with extlinux and add my working kernel/initramfs to it.

When I tested my first versions of this, it would boot with about 28 MB of RAM allocated to the VM, but let’s say the minimum is 32 MB to be safe – small enough to run on my laptop without impacting performance.

Building this system went from a couple of hours of work to a week of troubleshooting and asking questions in IRC. The result is probably bigger than my original goal, and in retrospect maybe I should have just dual-booted to work with the serial converter instead. Then again, if I’d done that I wouldn’t know buildroot nearly as well as I do now.

Linux booting and detecting a USB to serial converter

Picocom and microcom are installed in this tiny system, so before networking it up I tested it out with

microcom -s 115200 /dev/ttyUSB0

and… nothing. Panic – had the soldering on the cable held up to being packed in a suitcase? What had I done wrong? I installed the Windows driver mentioned above and connected via PuTTY – also nothing. Not even line noise at 9600 baud. Just before I was going to give up for the day, I noticed that the Nexus 4 bumper case was stopping the cable going all the way in. With the case gone, the cable and serial adapter worked fine and my job was done.

I haven’t tested bidirectional communication yet – I don’t think the Nexus 4 is accepting input on the UART – but the following commands expose the console output on port 1234:

stty -F /dev/ttyUSB0 raw
stty -F /dev/ttyUSB0 115200
nc -l -p 1234 < /dev/ttyUSB0
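From the other machine, reading that nc-exposed console is just a TCP connect. A minimal client sketch – the host and port are whatever you pointed nc at:

```python
import socket

def read_console(host, port):
    """Yield console lines from the nc listener that is bridging the
    USB-serial device, until the connection closes."""
    with socket.create_connection((host, port)) as s:
        buf = b""
        while chunk := s.recv(4096):
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                yield line.decode("utf-8", errors="replace")
```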


When I’m not writing code or causing trouble, I can be found eating food from The Cinnamon Scrolls