Enforce changing of passwords in a Debian preseed automated install

If you’re setting up an automated Debian install, your users and passwords will be set up something like

 passwd passwd/root-login boolean false
 passwd passwd/make-user boolean true
 passwd passwd/user-fullname string Adam Baxter
 passwd passwd/username string voltagex
 passwd passwd/user-password string changeme
 passwd passwd/user-password-again string changeme
 # You can also use user-password-crypted and supply a hash of the password;
 # see this StackExchange question for details.

in your preseed.cfg. This example would create a user named voltagex with the not-very-secure password changeme and, because root login is disabled, sudo access.
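If you’d rather not keep a plaintext password in your preseed file, the crypted variant mentioned in the comment above looks something like this (the hash here is just a placeholder; generate your own with something like mkpasswd -m sha-512 from the whois package):

```
 passwd passwd/user-password-crypted password $6$[your hash here]
```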

The trick is that you can enforce a password change at the same time with a late_command that runs just before the system reboots:

preseed         preseed/late_command    string  in-target passwd --expire voltagex

This marks the password as expired and results in the following when the user first logs in:

A screenshot showing that the password for the new user has expired

Resistance is futile: WhatsApp for Android hangs on “looking for backups”

TL;DR: WhatsApp for Android 2.12.453 won’t really function unless it can read your contacts. Scroll down if you like a story with your solutions.

I was finally pestered into installing WhatsApp after resisting for ages. Life was much simpler when I only needed one application to contact everyone, whether that was IRC, email, Pidgin (when it was Gaim) or even Trillian at the start of The Great Instant Messaging Fragmentation. Now I need multiple battery-draining applications to get in touch with various people. It starts to make SMS look attractive.

Firstly, I got this encouraging message:

A screenshot of the message in question

Anyway, once WhatsApp had “verified” my phone number, it stopped at a screen like this:

A loading screen that says “looking for backups”

I left it for a few minutes but it didn’t seem to help. Restarting the app / forcing it to close didn’t seem to help either. Some Googling revealed that it was “looking for backups” in Google Drive. I didn’t have Google Drive installed on my device so I grabbed it just in case. That didn’t help so I begrudgingly fired up adb logcat and got to work.

The first problem with logcat is that without any arguments, log messages will scroll past far too quickly. I was on a borrowed laptop so I didn’t want to install the full Android suite to get the GUI log viewer.

The logcat documentation doesn’t mention any way to filter by package (app) name, only by a “tag”. Bugger it, grep will do. After a while I narrowed it down to the following messages:
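For what it’s worth, the grep approach is nothing more than substring filtering on the raw log text; here’s a minimal sketch of the same idea in Python (the sample lines are invented for illustration):

```python
# Filter raw logcat-style lines by package name, the same way
# `adb logcat | grep com.whatsapp` does. The sample lines are made up.
sample_log = [
    "I ActivityManager: START u0 {cmp=com.whatsapp/.Main} from uid 10215",
    "I ActivityManager: Displayed com.android.settings/.Settings: +120ms",
    "W ActivityManager: Permission Denial: ... com.whatsapp ... READ_CONTACTS",
]

matches = [line for line in sample_log if "com.whatsapp" in line]
for line in matches:
    print(line)
```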

03-07 13:12:03.212 798 1493 I ActivityManager: START u0 {act=action_show_restore_one_time_setup cmp=com.whatsapp/.gdrive.GoogleDriveActivity} from uid 10215 on display 0
03-07 13:12:03.270 798 1485 W ActivityManager: Permission Denial: opening provider com.android.providers.contacts.ContactsProvider2 from ProcessRecord{b2ede58 6856:com.whatsapp/u0a215} (pid=6856, uid=10215) requires android.permission.READ_CONTACTS or android.permission.WRITE_CONTACTS
03-07 13:12:04.012 798 820 I ActivityManager: Displayed com.whatsapp/.gdrive.GoogleDriveActivity: +704ms (total +3s737ms)

No exceptions there but it definitely didn’t display anything to do with Google Drive. Fine, I’ll let you scan my contacts, WhatsApp (sorry to the two people I know who will be angry at me for their phone numbers being scanned).


After swiping away or force-stopping the app, restart it. You can skip the permission prompt here; it’ll work fine. You can even go back and deny the contacts permission again, but WhatsApp is very insistent that it will read (and write!) your contact list. Denying the permission after the fact will give you a wobbly app and the following in your logs:

03-07 13:54:25.602 798 1015 W ActivityManager: Permission Denial: opening provider com.android.providers.contacts.ContactsProvider2 from ProcessRecord{f618cd3 14391:com.whatsapp/u0a217} (pid=14391, uid=10217) requires android.permission.READ_CONTACTS or android.permission.WRITE_CONTACTS
03-07 13:59:35.874 798 1939 I ActivityManager: START u0 {cmp=com.whatsapp/.RequestPermissionActivity (has extras)} from uid 10217 on display 0
03-07 13:59:36.161 798 820 I ActivityManager: Displayed com.whatsapp/.RequestPermissionActivity: +249ms
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.util.Log.a(Log.java:244)
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.util.Log.a(Log.java:240)
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.util.Log.i(Log.java:56)
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.ig.a(ig.java:847)
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.akz.a(akz.java:307)
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.apu.a(apu.java:28)
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.apu.doInBackground(apu.java:29)
03-07 13:59:54.549 14391 14441 E WhatsApp: at com.whatsapp.util.cq.run(cq.java:2)

Enjoy the future of messaging! I’ll be over hanging out with the few remaining IRC users on Freenode if you need me.

FreeNAS Jail: “pkg: Unable to find catalogs”

When creating a new jail in FreeNAS 9.3, you might get an error after running pkg install (and the catalogues don’t seem to be saved at all):

# pkg install cfdisk gdisk
Updating repository catalogue
digests.txz                         100% 2080KB 130.0KB/s 170.4KB/s   00:16
packagesite.txz                     100% 5375KB 141.5KB/s 235.1KB/s   00:38
pkg: package field incomplete: comment
Incremental update completed, 24491 packages processed:
0 packages updated, 0 removed and 24491 added.
pkg: Unable to find catalogs

WTF?

Turns out the jail config is wrong, at least it is if you built your jail using http://download.freenas.org/jails/9.2/x64/freenas-standard-9.2-RELEASE.tgz (I’d found a reference to this and not realised the format had changed between releases. Oops.)

You should be using http://download.freenas.org/jails/9.3/x64/freenas-standard-9.3-RELEASE.tgz and the associated mtree file, but if you’ve already used the 9.2 config, the fix is to write a correct repository definition:

 cat > /usr/local/etc/pkg/repos/FreeBSD.conf <<'EOF'
FreeBSD: {
    url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
    mirror_type: "srv",
    enabled: yes
}
EOF

and then run pkg update again.

It looks like FreeNAS caches and protects the jail templates, so I recommend removing the broken template in the UI and recreating it with the correct URL.

FreeNAS 9.3 on the SuperMicro 5028A-TN4 / A1SRi-2758F

Warning: work-in-progress article

If you have this board you probably have IPMI. Use the “Virtual Storage” option to mount the FreeNAS ISO:

A screenshot of the IPMI “Virtual Storage” menu

You may have to screw around in the BIOS (actually UEFI, but they do a great job of emulating the terrible UX from 20 years ago) to get the “ATEN Virtual CDROM” to show up as a boot option. You may also be able to boot the installer from one of the rear USB ports but this was an exercise in frustration.

Once you get into the FreeNAS installer, drop to a shell and run

kldload xhci.ko

You should get output similar to what’s shown below.

A screenshot showing xhci.ko loading successfully

Type exit to get back to the installer and continue onwards. When you hit install, you should get to choose your USB3 device – it’s probably right down the bottom so scroll down.

A screenshot of the installer’s device list showing the USB3 device

Here’s where https://forums.freenas.org/index.php?threads/problems-installing-9-3.25516/ is unfortunately wrong. I’m not sure whether adding

set kFreeBSD.xhci_load=YES

to GRUB’s options ever worked, but it certainly didn’t help me. So I tried something else.

A screenshot of the GRUB edit that eventually worked

At the FreeNAS/GRUB boot menu, hit ‘e’ to edit the FreeNAS entry, then add

kfreebsd_module_elf /ROOT/default/@/boot/kernel/xhci.ko

At the moment I’m not sure whether position is important, but I added it after the line that loads ispfw.ko.

Unfortunately, until we can get to grub.cfg or loader.conf, we’ll have to re-enter that line every time we want to boot from USB3.
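Once the installed system actually boots, the usual FreeBSD way to make a module load persistent is a line in /boot/loader.conf. I haven’t verified this on this board yet, so treat it as an assumption:

```
xhci_load="YES"
```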

2015, the year of the Never Ending Open Source To Do List

This year was the year I took on way too much. It’s so easy to hit Fork on a GitHub repository, or to raise an issue, to try to help out on someone else’s project but eventually it all catches up with you.

I feel most guilty about bailing on WheelMap, especially as building accessibility is something that affects me personally. I was supposed to help implement a new search API but I got distracted building a Dockerfile to make my development setup easier. This eventually ended up sapping my enthusiasm for the project, then (paid) work got crazy and I had to step away.

I feel like I have open-source-ADD. There are so many cool projects to help out on, so many bugs that could be fixed if I could just spend 30 minutes on them… then another 30 minutes, and another until it’s Sunday at 3am and I really should sleep, lest my nighttime exploits affect my ability to pay for my Internet connection.

I didn’t really have a term for the way I was approaching open source / volunteer work until I replied to someone on Hacker News. The Never Ending Open Source To Do List. It’s like making the minimum repayments on a credit card while continuing to buy every game in a Steam Sale. It’s not going to end well – eventually you have to start paying interest.

In an article titled A Lot Happens, Jesse writes:

You can’t be emotionally all in on everything. You can’t make another 24 hours appear to be “present” for everything. Instead, I stole time and ran my emotional credit card like it was limitless.

I guess reading about what Jesse went through stuck with me. I am NOT comparing our two experiences, but I can definitely relate to some of the patterns of behaviour (and I feel a bit guilty because I know I’ve asked favours of Jesse in the past).

You should go and read that one, and the followup. This article will still be here when you get back. It should serve as a warning for everyone who has stayed up late at night, somewhere near the Ballmer Peak, trying to get one more patch in or one more mailing-list discussion sorted.

In my perfect world, someone would pay for me to be Open Source Batman, swooping in on every call for contributors to help get projects going again. I mean, I was able to help PureDarwin by sending a few well-placed emails, wasn’t I?

In reality, I’m trading in my free time for a bit of an ego boost when issues get closed or patches get merged. My free time is a limited resource and sometimes it’d probably be better used getting fit or learning TypeScript.

I’m lucky I’ve had a bit of spare cash this year so I’ve been able to just spin up another instance to run another build for whatever project I’m working on this second, without putting too much thought into it. Unfortunately, the Australian dollar took a dive this year and most of the services I rely on are billed in US dollars. Renewals for things like my password manager, my VPS and my development tools came up and I found myself facing a bit of a shortfall.

Next year I’ll look into setting up a Patreon or something to cover the cost of the AWS instances I’ll be using for a lot of development as I’ve just turned off my DigitalOcean instance due to the falling Australian dollar.

But first I’ve got to do something worthy of being paid for – thus the cycle of the Never Ending Open Source To Do List starts again.

I hope that whatever job I find next year (hire me!) allows me to contribute to open source. I really do derive a lot of happiness from contributing to open source, but it’s a delicate balance between extracting a bit of a buzz from hitting the Fork button and getting enough sleep.

I don’t claim to have all the answers, and you should go read Ashe Dryden’s post on OSS and unpaid labor if you are also struggling with your open source contributions.

Here’s to knocking off a few more items on the List in 2016.

Malvertising, on my StackOverflow? It’s more likely than you think

Context / Link ads on StackOverflow

I’m pretty sure the capitalised, pop up generating links in the screenshot above aren’t covered in the StackExchange-backed AcceptableAds initiative.

Now, apart from snarky opening lines, I’m not blaming StackOverflow for this. After panicking that my newly flashed Nexus 5 was infected with some kind of malware, I fired up tPacketCapture to check what exactly was going on. Unfortunately, this uses the Android VPN API to capture traffic without root, and Android thinks it’s a good idea to send DNS over the unencrypted connection by default – so I couldn’t check for DNS poisoning or anything like that.

I’ve dropped the PCAP into Fiddler4 for ease of viewing. Can you see anything wrong here?

Dodgy traffic listing

As far as I can tell, a sketchy advertiser got in via either ScorecardResearch or QuantServ. The wonderfully named hatredsmotorcyclist is serving some kind of obfuscated JavaScript related to DNSUnlocker, a known malware provider, though not one normally seen on Android. In a desktop browser, that JavaScript generates a whole lot of fake virus-scanner popups which are sure to completely screw up your PC. I should probably run them in a VM at some point.

I can’t reproduce the link ads from the first screenshot, but I’ve posted a beautified version of the DNSUnlocker JavaScript as a Gist. I don’t recommend running it. I did – in the console of Chrome on GitHub, which means I was partially protected by the Content Security Policy that GitHub sets.

You can see what the first script tried to load in the second file of that gist, but I’m sorry I couldn’t format it very well. You can see something called “re-markit” which I’m going to guess did the “marking” in the first screenshot.

Whether I was DNS poisoned or not I won’t know, but this is all the more reason to run an ad-blocker, and be very careful about what you let through – an ad network that’s benign today could be serving the latest 0-day tomorrow. I’m lucky that all I got was some crappy ads and a Play Store redirect.

A cleaned-up version of the Fiddler session archive is hosted here – malvertising.saz (sorry, I had to rename it for WordPress to allow the upload). I’d appreciate any help doing a more detailed analysis and reporting those strange domains to the registrars and the hosters. Oh, and if you see something like this on your Android, take a packet capture, then clear your browser cache. If you’re rooted, you may want to install AdBlock from the F-Droid store.

Like what I do? Support me here

Need a drink? Try a Whiskey Sweet n’ Sour from The Cinnamon Scrolls

Reimplementing apt-file, badly

I recently had an idea: write an apt-file (yum whatprovides) equivalent that works across multiple distros. If you’ve never used a utility like this, they tell you which package provides a certain file. This would help when writing README files, to work out which dependencies are needed on each distro.

After a couple of hours of fiddling around (read: procrastinating), I had a working import from the repository Contents.gz file that’s on every Debian mirror. The thing that struck me was the file sizes:

File                                                        Size
Contents-amd64.gz                                      26,721 KB
Contents-amd64                                        378,971 KB
Contents-amd64.sqlite (naive import, lots of duplication)  452,146 KB
Contents-amd64.sqlite (de-normalised)                 330,274 KB
Contents-amd64.sqlite.gz*                              55,858 KB

Then I got side tracked…

Contents-amd64.gz is what’s downloaded when you run apt-file update. First off, I basically looped over each line of the file, split it into file name and package name, and inserted it straight into SQLite.

I then remembered some introductory database theory and ‘de-normalised’ the data (strictly speaking, this is normalisation): each package name becomes an integer id, and each file row stores only a reference to that integer.

The downside is that I needed two passes: one to build the list of packages, and another to link each file to its package id. In SQLite, this meant creating copies of tables, dropping the duplicates and then running VACUUM to reclaim the space. The result is that I (barely) beat the original uncompressed file for storage efficiency.
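As a rough sketch of that layout (the sample Contents lines and table names are invented, and this single-pass version uses INSERT OR IGNORE instead of the two-import copy-and-VACUUM dance I actually ran):

```python
import sqlite3

# Sample lines in the format used by Debian's Contents files:
# file path, whitespace, section/package. Invented for illustration.
sample = [
    "bin/bash                shells/bash",
    "bin/rbash               shells/bash",
    "etc/bash.bashrc         shells/bash",
    "etc/bash_completion     shells/bash-completion",
]

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE packages (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE files (path TEXT, package_id INTEGER REFERENCES packages(id));
""")

for line in sample:
    # rsplit copes with file names that contain spaces
    path, package = line.rsplit(None, 1)
    db.execute("INSERT OR IGNORE INTO packages (name) VALUES (?)", (package,))
    (pkg_id,) = db.execute(
        "SELECT id FROM packages WHERE name = ?", (package,)).fetchone()
    db.execute("INSERT INTO files VALUES (?, ?)", (path, pkg_id))

# Search: join back to the package name, like `apt-file search`
rows = db.execute("""
    SELECT p.name, f.path FROM files f
    JOIN packages p ON p.id = f.package_id
    WHERE f.path LIKE ?
""", ("%bash%",)).fetchall()
for name, path in rows:
    print(f"{name}: {path}")
```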

Next, what about search? Could I really beat grep?

I think caching and load affected some of the results – a couple of runs a few hours apart produced wildly different numbers. All of my tests were run on a 512 MB DigitalOcean instance.

$ time zgrep bash /var/cache/apt/apt-file/mirrors.digitalocean.com_debian_dists_jessie_main_Contents-amd64.gz | head
bin/bash                                                shells/bash
bin/bash-static                                         shells/bash-static
bin/rbash                                               shells/bash
etc/apparmor.d/abstractions/bash                        admin/apparmor
etc/bash.bashrc                                         shells/bash
etc/bash_completion                                     shells/bash-completion
etc/bash_completion.d/R                                 gnu-r/r-base-core
etc/bash_completion.d/_publican                         perl/publican
etc/bash_completion.d/aapt                              devel/aapt
etc/bash_completion.d/adb                               devel/android-tools-adb

real    0m0.013s
user    0m0.000s
sys     0m0.000s

$ time apt-file search bash | head
0install-core: /usr/share/bash-completion/completions/0install
0install-core: /usr/share/bash-completion/completions/0launch
aapt: /etc/bash_completion.d/aapt
acheck: /usr/share/doc/acheck/bash_completion
acl2-books: /usr/lib/acl2-6.5/books/misc/bash-bsd.o
acl2-books: /usr/lib/acl2-6.5/books/misc/bash.o
acl2-books: /usr/share/acl2-6.5/books/misc/bash-bsd.o
acl2-books: /usr/share/acl2-6.5/books/misc/bash.o
acl2-books-certs: /usr/share/acl2-6.5/books/misc/bash-bsd.cert
acl2-books-certs: /usr/share/acl2-6.5/books/misc/bash.cert

real    0m3.631s
user    0m3.268s
sys     0m0.280s
$ time python search.py bash | head
shells/bash: bin/rbash
admin/apparmor: etc/apparmor.d/abstractions/bash
shells/bash: etc/bash.bashrc
shells/bash-completion: etc/bash_completion
gnu-r/r-base-core: etc/bash_completion.d/R
perl/publican: etc/bash_completion.d/_publican
devel/aapt: etc/bash_completion.d/aapt
devel/android-tools-adb: etc/bash_completion.d/adb
science/libadios-bin: etc/bash_completion.d/adios
utils/silversearcher-ag: etc/bash_completion.d/ag

real    0m2.375s
user    0m1.732s
sys     0m0.540s

Not too bad: I’m competitive with apt-file. Of course I’m missing some features, but it’s a good result for a little experiment.

The code is on my GitHub, if you’re interested.

I’ll keep working on this and add support for yum/dnf’s formats and probably migrate to a real database backend.

Sidenotes

  • The code initially worked, but then would crash when piped to head. The answer? You have to handle SIGPIPE on UNIX-like systems. See this StackOverflow post for more info

  • gzipping the database on an i5-based laptop took nearly 20 minutes. A saner compression level is a lot quicker, but the compressed size grows to about 64 megabytes

  • SQLite can read gzipped files with a proprietary extension. It’s a pity the sqlite3 Python module doesn’t accept file handles to database files, otherwise I could just wrap it in a gzipstream and I’d be good to go

  • Someone with better SQL skills could probably make the database import a lot faster. On my DigitalOcean instance, the initial import takes a few minutes.
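The SIGPIPE fix from the first sidenote boils down to restoring the default signal disposition before writing output. This is the commonly suggested approach on POSIX systems, not necessarily exactly what my script does:

```python
import signal

# Python replaces the default SIGPIPE handler so that a closed pipe
# raises BrokenPipeError. For a CLI tool that's routinely piped into
# `head`, restore the classic UNIX behaviour: die quietly instead.
if hasattr(signal, "SIGPIPE"):  # SIGPIPE doesn't exist on Windows
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

for i in range(10):  # stand-in for the real search output loop
    print(i)
```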
