Resistance is futile: WhatsApp for Android hangs on “looking for backups”

TLDR: WhatsApp for Android 2.12.453 won’t really function unless it has access to read your contacts. Scroll down if you like a story with your solutions.

I was finally pestered into installing WhatsApp after resisting for ages. Life was much simpler when I only needed one application to contact everyone, whether that was IRC, email, Pidgin (when it was Gaim) or even Trillian at the start of The Great Instant Messaging Fragmentation. Now I need multiple battery draining applications to get in touch with various people. It starts to make SMS look attractive.

Firstly, I got this encouraging message:

Anyway, once WhatsApp had “verified” my phone number, it stopped at a screen like this:

A loading screen that says “looking for backups”

I left it for a few minutes but it didn’t seem to help. Restarting the app / forcing it to close didn’t seem to help either. Some Googling revealed that it was “looking for backups” in Google Drive. I didn’t have Google Drive installed on my device so I grabbed it just in case. That didn’t help so I begrudgingly fired up adb logcat and got to work.

The first problem with logcat is that without any arguments, log messages will scroll past far too quickly. I was on a borrowed laptop so I didn’t want to install the full Android suite to get the GUI log viewer.

The documentation doesn’t say that you can filter by package (app) name, only by a “tag”. Bugger it, grep will do. After a while I narrowed it down to the following messages:

03-07 13:12:03.212 798 1493 I ActivityManager: START u0 {act=action_show_restore_one_time_setup cmp=com.whatsapp/.gdrive.GoogleDriveActivity} from uid 10215 on display 0
03-07 13:12:03.270 798 1485 W ActivityManager: Permission Denial: opening provider from ProcessRecord{b2ede58 6856:com.whatsapp/u0a215} (pid=6856, uid=10215) requires android.permission.READ_CONTACTS or android.permission.WRITE_CONTACTS
03-07 13:12:04.012 798 820 I ActivityManager: Displayed com.whatsapp/.gdrive.GoogleDriveActivity: +704ms (total +3s737ms)

No exceptions there but it definitely didn’t display anything to do with Google Drive. Fine, I’ll let you scan my contacts, WhatsApp (sorry to the two people I know who will be angry at me for their phone numbers being scanned).


After swiping away / force-stopping the app, restart it. You can skip the permission prompt here – it’ll work fine. You can even go back and deny the contacts permission again, but WhatsApp is very insistent that it will read (and write!) your contact list. Denying the permission after the fact will give you a wobbly app and the following in your logs:

03-07 13:54:25.602 798 1015 W ActivityManager: Permission Denial: opening provider from ProcessRecord{f618cd3 14391:com.whatsapp/u0a217} (pid=14391, uid=10217) requires android.permission.READ_CONTACTS or android.permission.WRITE_CONTACTS
03-07 13:59:35.874 798 1939 I ActivityManager: START u0 {cmp=com.whatsapp/.RequestPermissionActivity (has extras)} from uid 10217 on display 0
03-07 13:59:36.161 798 820 I ActivityManager: Displayed com.whatsapp/.RequestPermissionActivity: +249ms
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.util.Log.a(
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.util.Log.a(
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.util.Log.i(
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.ig.a(
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.akz.a(
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.apu.a(
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.apu.doInBackground(
03-07 13:59:54.549 14391 14441 E WhatsApp: at

Enjoy the future of messaging! I’ll be over hanging out with the few remaining IRC users on Freenode if you need me.

FreeNAS Jail: “pkg: Unable to find catalogs”

When creating a new jail in FreeNAS 9.3, you might get an error after running pkg install (and the catalogues don’t seem to be saved at all):

# pkg install cfdisk gdisk
Updating repository catalogue
digests.txz                         100% 2080KB 130.0KB/s 170.4KB/s   00:16
packagesite.txz                     100% 5375KB 141.5KB/s 235.1KB/s   00:38
pkg: package field incomplete: comment
Incremental update completed, 24491 packages processed:
0 packages updated, 0 removed and 24491 added.
pkg: Unable to find catalogs


Turns out the jail config is wrong – at least it is if you built your jail from the old 9.2-style config. (I’d found a reference to the old format and not realised it had changed between releases. Oops.)

You should be using the current release’s config and the associated mtree file, but if you’ve ended up with the 9.2 one, the fix is to recreate the repository definition:

# cat > /usr/local/etc/pkg/repos/FreeBSD.conf <<'EOF'
FreeBSD: {
  url: "pkg+${ABI}/latest",
  mirror_type: "srv",
  enabled: yes
}
EOF

and then run pkg update again.

It looks like FreeNAS caches and protects the jail templates, so for future jails, remove the template in the UI and recreate it with the correct URL.

FreeNAS 9.3 on the SuperMicro 5028A-TN4 / A1SRi-2758F

Warning: work in progress article

If you have this board you probably have IPMI. Use the “Virtual Storage” option to mount the FreeNAS ISO:


You may have to screw around in the BIOS (actually UEFI, but they do a great job of emulating the terrible UX from 20 years ago) to get the “ATEN Virtual CDROM” to show up as a boot option. You may also be able to boot the installer from one of the rear USB ports but this was an exercise in frustration.

Once you get into the FreeNAS installer, drop to a shell and run

kldload xhci.ko

You should see the xhci driver attach in the console output.


Type exit to get back to the installer and continue onwards. When you hit install, you should get to choose your USB3 device – it’s probably right down the bottom so scroll down.


Here’s where the advice I’d found is unfortunately wrong. I’m not sure whether adding

set kFreeBSD.xhci_load=YES

to GRUB’s options ever worked, but it certainly didn’t help me. So I tried something else.


At the FreeNAS/GRUB boot menu, hit ‘e’ to edit the FreeNAS entry, then add

kfreebsd_module_elf /ROOT/default/@/boot/kernel/xhci.ko

At the moment I’m not sure whether position is important, but I added it after the line that loads ispfw.ko.

Unfortunately until we can get to grub.cfg or loader.conf we’ll have to re-enter that line every time we want to boot from USB3.

2015, the year of the Never Ending Open Source To Do List

This year was the year I took on way too much. It’s so easy to hit Fork on a GitHub repository, or to raise an issue, to try to help out on someone else’s project but eventually it all catches up with you.

I feel most guilty about bailing on WheelMap, especially as accessibility is something that affects me personally. I was supposed to help implement a new search API, but I got distracted building a Dockerfile to make my development setup easier. That eventually sapped my enthusiasm for the project, then (paid) work got crazy and I had to step away.

I feel like I have open-source-ADD. There are so many cool projects to help out on, so many bugs that could be fixed if I could just spend 30 minutes on them… then another 30 minutes, and another until it’s Sunday at 3am and I really should sleep, lest my nighttime exploits affect my ability to pay for my Internet connection.

I didn’t really have a term for the way I was approaching open source / volunteer work until I replied to someone on Hacker News. The Never Ending Open Source To Do List. It’s like making the minimum repayments on a credit card while continuing to buy every game in a Steam Sale. It’s not going to end well – eventually you have to start paying interest.

In an article titled A Lot Happens, Jesse writes:

You can’t be emotionally all in on everything. You can’t make another 24 hours appear to be “present” for everything. Instead, I stole time and ran my emotional credit card like it was limitless.

I guess reading about what Jesse went through stuck with me. I am NOT comparing our two experiences, but I can definitely relate to some of the patterns of behaviour (and I feel a bit guilty because I know I’ve asked favours of Jesse in the past).

You should go and read that one, and the followup. This article will still be here when you get back. It should serve as a warning for everyone who has stayed up late at night, somewhere near the Ballmer Peak, trying to get one more patch in or one more mailing-list discussion sorted.

In my perfect world, someone would pay for me to be Open Source Batman, swooping in on every call for contributors to help get projects going again. I mean, I was able to help PureDarwin by sending a few well-placed emails, wasn’t I?

In reality, I’m trading in my free time for a bit of an ego boost when issues get closed or patches get merged. My free time is a limited resource and sometimes it’d probably be better used getting fit or learning TypeScript.

I’m lucky I’ve had a bit of spare cash this year so I’ve been able to just spin up another instance to run another build for whatever project I’m working on this second, without putting too much thought into it. Unfortunately, the Australian dollar took a dive this year and most of the services I rely on are billed in US dollars. Renewals for things like my password manager, my VPS and my development tools came up and I found myself facing a bit of a shortfall.

Next year I’ll look into setting up a Patreon or something to cover the cost of the AWS instances I’ll be using for a lot of development as I’ve just turned off my DigitalOcean instance due to the falling Australian dollar.

But first I’ve got to do something worthy of being paid for – thus the cycle of the Never Ending Open Source To Do List starts again.

I hope that whatever job I find next year (hire me!) allows me to contribute to open source. I really do derive a lot of happiness from contributing to open source, but it’s a delicate balance between extracting a bit of a buzz from hitting the Fork button and getting enough sleep.

I don’t claim to have all the answers, and you should go read Ashe Dryden’s post on OSS and unpaid labor if you are also struggling with your open source contributions.

Here’s to knocking off a few more items on the List in 2016.

Malvertising, on my StackOverflow? It’s more likely than you think

Context / Link ads on StackOverflow

I’m pretty sure the capitalised, pop up generating links in the screenshot above aren’t covered in the StackExchange-backed AcceptableAds initiative.

Now, apart from snarky opening lines, I’m not blaming StackOverflow for this. After panicking that my newly flashed Nexus 5 was infected with some kind of malware, I fired up tPacketCapture to check what exactly was going on. Unfortunately, this uses the Android VPN API to capture traffic without root, and Android thinks it’s a good idea to send DNS over the unencrypted connection by default – so I couldn’t check for DNS poisoning or anything like that.

I’ve dropped the PCAP into Fiddler4 for ease of viewing. Can you see anything wrong here?

Dodgy traffic listing

As far as I can tell, a sketchy advertiser has piggybacked on either ScorecardResearch or QuantServ. The wonderfully named hatredsmotorcyclist domain is serving some kind of obfuscated JavaScript related to DNSUnlocker, a known malware provider – though not normally seen on Android. In a desktop browser, that JavaScript generates a whole lot of fake virus-scanner popups which are sure to completely screw up your PC. I should probably run them in a VM at some point.

I can’t reproduce the link ads from the first screenshot, but I’ve posted a beautified version of the DNSUnlocker javascript as a Gist. I don’t recommend running it. I did – in the console of Chrome on GitHub, which means I’m partially protected by the Content Security Policies that GitHub sets.

You can see what the first script tried to load in the second file of that gist, but I’m sorry I couldn’t format it very well. You can see something called “re-markit” which I’m going to guess did the “marking” in the first screenshot.

I’ll never know for sure whether I was DNS-poisoned, but this is all the more reason to run an ad-blocker, and to be very careful about what you let through – an ad network that’s benign today could be serving the latest 0-day tomorrow. I’m lucky that all I got was some crappy ads and a Play Store redirect.

A cleaned-up version of the Fiddler session archive is hosted here – malvertising.saz (sorry, I had to rename it for WordPress to allow the upload). I’d appreciate any help doing a more detailed analysis and reporting those strange domains to the registrars and the hosters. Oh, and if you see something like this on your Android, take a packet capture, then clear your browser cache. If you’re rooted, you may want to install AdBlock from the F-Droid store.

Like what I do? Support me here

Need a drink? Try a Whiskey Sweet n’ Sour from The Cinnamon Scrolls

Reimplementing apt-file, badly

I recently had an idea: write an apt-file (or yum whatprovides) equivalent that works across multiple distros. If you’ve never used a utility like this, it tells you which package provides a given file. This would help when writing README files, to work out which dependencies are needed on each distro.

After a couple of hours of fiddling around (read: procrastinating), I had a working import from the repository Contents.gz file that’s on every Debian mirror. The thing that struck me was the file sizes:

File                                                       Size
Contents-amd64.gz                                          26,721 KB
Contents-amd64                                             378,971 KB
Contents-amd64.sqlite (naive import, lots of duplication)  452,146 KB
Contents-amd64.sqlite (normalised)                         330,274 KB
Contents-amd64.sqlite.gz*                                  55,858 KB

Then I got side tracked…

Contents-amd64.gz is what’s downloaded when you run apt-file update. First off, I simply looped over each line in the file, split it into file path and package name, and inserted the pair straight into SQLite.

I then remembered some introductory database theory and normalised the data – each package name becomes an integer id, and each row then stores only a reference to that integer.

The downside is that I needed to run two imports: once to get the list of packages and files, and again to link each file to its package. In SQLite, this meant creating copies of tables, dropping the duplicates and then running VACUUM to reclaim the space. The result is that I (barely) beat the original file for storage efficiency.
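Roughly, the schema and two-pass import look like this – a sketch with invented table and column names, not the actual code from my repo:

```python
import sqlite3

def import_contents(lines, db_path=":memory:"):
    """Two-pass import of Debian Contents lines ('<path>   <section/package>')
    into a normalised pair of tables: packages get an integer id, and each
    file row stores only that id instead of the repeated package name."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute("CREATE TABLE packages (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
    cur.execute("CREATE TABLE files (path TEXT, package_id INTEGER)")

    rows = []
    for line in lines:
        # Contents lines are whitespace-padded: '<path>   <section/pkg>[,...]'
        path, _, pkgs = line.rstrip().rpartition(" ")
        for pkg in pkgs.split(","):
            rows.append((path.rstrip(), pkg))

    # Pass 1: collect the distinct package names, each getting an integer id
    cur.executemany("INSERT OR IGNORE INTO packages (name) VALUES (?)",
                    [(pkg,) for _, pkg in rows])
    # Pass 2: store each file against its package's id, not the name itself
    cur.executemany("INSERT INTO files (path, package_id) "
                    "SELECT ?, id FROM packages WHERE name = ?", rows)
    conn.commit()
    return conn
```

A real import would stream the decompressed Contents file line by line rather than holding everything in a list, but the shape of the two passes is the same.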

Next, what about search? Could I really beat grep?

I think caching and load affected some of the results – a couple of runs a few hours apart produced wildly different numbers. All of my tests were run on a 512MB DigitalOcean instance.

$ time zgrep bash /var/cache/apt/apt-file/mirrors.digitalocean.com_debian_dists_jessie_main_Contents-amd64.gz | head
bin/bash                                                shells/bash
bin/bash-static                                         shells/bash-static
bin/rbash                                               shells/bash
etc/apparmor.d/abstractions/bash                        admin/apparmor
etc/bash.bashrc                                         shells/bash
etc/bash_completion                                     shells/bash-completion
etc/bash_completion.d/R                                 gnu-r/r-base-core
etc/bash_completion.d/_publican                         perl/publican
etc/bash_completion.d/aapt                              devel/aapt
etc/bash_completion.d/adb                               devel/android-tools-adb

real    0m0.013s
user    0m0.000s
sys     0m0.000s

$ time apt-file search bash | head
0install-core: /usr/share/bash-completion/completions/0install
0install-core: /usr/share/bash-completion/completions/0launch
aapt: /etc/bash_completion.d/aapt
acheck: /usr/share/doc/acheck/bash_completion
acl2-books: /usr/lib/acl2-6.5/books/misc/bash-bsd.o
acl2-books: /usr/lib/acl2-6.5/books/misc/bash.o
acl2-books: /usr/share/acl2-6.5/books/misc/bash-bsd.o
acl2-books: /usr/share/acl2-6.5/books/misc/bash.o
acl2-books-certs: /usr/share/acl2-6.5/books/misc/bash-bsd.cert
acl2-books-certs: /usr/share/acl2-6.5/books/misc/bash.cert

real    0m3.631s
user    0m3.268s
sys     0m0.280s
$ time python bash | head
shells/bash: bin/rbash
admin/apparmor: etc/apparmor.d/abstractions/bash
shells/bash: etc/bash.bashrc
shells/bash-completion: etc/bash_completion
gnu-r/r-base-core: etc/bash_completion.d/R
perl/publican: etc/bash_completion.d/_publican
devel/aapt: etc/bash_completion.d/aapt
devel/android-tools-adb: etc/bash_completion.d/adb
science/libadios-bin: etc/bash_completion.d/adios
utils/silversearcher-ag: etc/bash_completion.d/ag

real    0m2.375s
user    0m1.732s
sys     0m0.540s

Not too bad, I’m competitive with apt-file. Now of course, I’m missing some features but it’s a good result for a little experiment.

The code is on my GitHub, if you’re interested.

I’ll keep working on this and add support for yum/dnf’s formats and probably migrate to a real database backend.


  • The code initially worked, but then would crash when piped to head. The answer? You have to handle SIGPIPE on UNIX-like systems. See this StackOverflow post for more info.

  • gzipping the database on an i5-based laptop took nearly 20 minutes. A more sensible compression level is a lot quicker, but the compressed size comes out at about 64 megabytes.

  • SQLite can read gzipped files with a proprietary extension. It’s a pity the sqlite3 Python module doesn’t accept file handles for database files, otherwise I could just wrap one in a gzip stream and be good to go.

  • Someone with better SQL skills could probably make the database import a lot faster. On my DigitalOcean instance, the initial import takes a few minutes.
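The SIGPIPE fix from the first point amounts to restoring the default signal disposition before writing output – a minimal sketch of the idea:

```python
import signal
import sys

# Python normally turns SIGPIPE into a BrokenPipeError exception; restore
# the OS default so the script just exits quietly when a downstream reader
# like `head` closes the pipe early. (SIGPIPE doesn't exist on Windows,
# hence the guard.)
if hasattr(signal, "SIGPIPE"):
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

for i in range(100):
    sys.stdout.write("line %d\n" % i)
```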


Silly Project of The Day: Find out when Bake Off is on next, via phone

I’ve recently been playing around with Plivo, which is a competitor to Twilio and lets you connect voice calls and text messages to HTTP endpoints.

I decided to put together a silly demo app that used the text-to-speech API.

First off, let’s parse the BBC programme listing page – the one that shows when The Great British Bake Off is on next.

The BBC’s HTML is excellently marked up, so we’ll import BeautifulSoup and requests, then get to work.

import requests
from bs4 import BeautifulSoup
import os
import arrow

def get_next_bakeoff():
    url = ""

    # Fetch the page once and cache it, so repeated calls don't hit the BBC
    if not os.path.isfile("bakeoff.cache"):
        bakeoff = requests.get(url).content
        open('bakeoff.cache', 'w').write(str(bakeoff))

    bakeoff = open('bakeoff.cache', 'r')
    content = BeautifulSoup(bakeoff, 'html.parser')
    elements = content.find('ol', 'highlight-box-wrapper')
    spans = elements.find_all('span')

I’ll save you some of the work here. The main program listing on that page is in an ordered-list (ol element). The interesting meta-data is in spans, so the lazy way is to just grab all of them and then filter from there.

Each program listing has a span whose property attribute is ‘position’, so let’s use list comprehensions and magic.

positions = [span for span in spans if span.get('property') == 'position']

Now we have BeautifulSoup references for each program that has a ‘position’ in the list.

Next, let’s sort the list and grab the ‘startDate’ attribute for the latest episode:

        positions.sort(key=lambda x: x.text)
        start_time = [p.find('h3') for p in positions[-1].parents if
                      p.find('h3') is not None and p.find('h3').get('property') == 'startDate'][0]['content']

Now we have a start_time of something like 2015-08-26T20:00:00+01:00 which is fantastic, because it’s a standard datetime, and it’s got a timezone offset. BBC are really making this easy for us.
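As a side note – and assuming a modern Python 3, not what I originally ran – that timestamp parses with the standard library alone:

```python
from datetime import datetime, timezone

# fromisoformat (Python 3.7+) understands the BBC's offset-bearing timestamps
start = datetime.fromisoformat("2015-08-26T20:00:00+01:00")

# Normalise to UTC so timestamps from different zones compare sanely
start_utc = start.astimezone(timezone.utc)
print(start_utc.isoformat())  # 2015-08-26T19:00:00+00:00
```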

Next, let’s use the excellent arrow Python module to convert the datetime into our local timezone (at the moment for me that’s Canada’s Pacific time)

        next_bakeoff = arrow.get(start_time).to('local') 

2015-08-26T20:00:00+01:00 becomes 2015-08-26T12:00:00-07:00, and we know I might be able to watch Bake Off in my lunch break on Wednesday, if I’m lucky.

        next_bakeoff = arrow.get(start_time).to('local')
        current_time ='local')
        if next_bakeoff > current_time:
            return next_bakeoff

The text-to-speech part is really the easiest. Have a look at Plivo’s documentation.
We could use Plivo’s XML library, but at the time I didn’t know about it and I was fighting compilation issues in lxml, so again I did this the easy way – plain str.format(). There’s not a lot of magic here – Plivo’s servers do the hard work. I’m setting en-GB so I get a voice with a distinctly British accent. The TTS voices are really nice; I think Plivo have shelled out the big bucks for Cepstral voices.

            xml = '''<Response>
                  <Speak language="en-GB" loop="1" voice="WOMAN">
                  The next episode of Bake Off is {0}
                  </Speak>
            </Response>'''.format(next_bakeoff.humanize())
The call to .humanize() comes from arrow and turns the timestamp into something nice like “in 5 days”.
What I haven’t shown here is the Flask app that my code lives in, so instead of return xml I do return Response(xml, mimetype='text/xml').
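If the string formatting ever gets fiddly, the standard library’s ElementTree can build the same response without touching lxml – a sketch of the idea, not Plivo’s own helper library:

```python
import xml.etree.ElementTree as ET

def speak_response(message):
    # Build <Response><Speak ...>message</Speak></Response> for Plivo
    response = ET.Element('Response')
    speak = ET.SubElement(response, 'Speak',
                          language='en-GB', loop='1', voice='WOMAN')
    speak.text = message
    return ET.tostring(response, encoding='unicode')

print(speak_response('The next episode of Bake Off is in 5 days'))
```

The upside over str.format() is that ElementTree escapes any awkward characters in the message for free.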

Plivo have posted a good example of a TTS application as well.

Left as an exercise to the reader is hooking up the application to a Plivo phone number. I just followed the tutorial in the Getting Started / Text to Speech on a Call section.

At 0.8 cents a call, it’s not a toy you’d want to play with in any significant volume, but it’s not going to break the bank, either.

The Great British Bake Off airs in the UK on BBC1 at 20:00 local time.

If you’d like to support me and this blog, send bitcoins to 17RugTAi9LdxMUcgVhpWVRRvVsWg11P6V5 or check out my Support Me page.

If you’d like something completely different, try making some Chicken Avocado Alfredo from The Cinnamon Scrolls