Time Errors

So I noticed that my comment on the NetBSD entry was dated in the future. So I checked the time zone in my WordPress settings, and it was still set to UTC. However, after fixing that, the time was still wrong. So I looked at the clock on my VM, which was off (I guess virtual machines don’t keep very good time once the uptime gets to be about a year or so). So I tried to install ntpdate on my VM, only to find out that Debian ended long-term support for Debian 6 (squeeze-lts) as of this month.
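(For the curious, here’s roughly what an ntpdate-style check boils down to: a minimal SNTP query, sketched in Python. pool.ntp.org is just an example server, not necessarily what I’d point the VM at, and this only measures the drift rather than stepping the clock.)

# A minimal SNTP query: ask an NTP server for the time and compare it to
# the local clock. Roughly the check ntpdate performs, minus the part
# where it actually adjusts anything.
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"    # example server; any reachable NTP host works
NTP_EPOCH_OFFSET = 2208988800  # seconds from 1900-01-01 (NTP) to 1970-01-01 (Unix)

def ntp_time(server=NTP_SERVER):
    # 48-byte client request: LI=0, VN=3, Mode=3 (client) packed into the first byte
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # The transmit timestamp's seconds field sits at bytes 40-43 of the reply
    seconds = struct.unpack("!I", data[40:44])[0]
    return seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    drift = time.time() - ntp_time()
    print(f"local clock is off by roughly {drift:+.1f} seconds")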

So now I’ve got to update the OS on my virtual machine. Last time I tried that, it didn’t boot and I had to start over from scratch. This time I think I’m just going to install the new kernel, try to reboot, watch the Xen console, and go back to the old kernel if it doesn’t boot. Also, I need to set up some kind of backup system for my VM anyway.

So this could be an interesting experiment. Also, there may be some downtime. (Although I hope not, ’cause it’s really inconvenient when my email server is offline.)

NetBSD

I decided to install NetBSD on my old laptop. I’ve been meaning to play around with the BSDs for a while, but only recently got around to doing so. Why NetBSD specifically? That’s a funny story, actually.

I was messing around with rump kernels and read that their device drivers come from NetBSD. This makes a lot of sense, because it’s the portability-focused BSD. (Open does security, Dragonfly does performance, Free does something…) So I thought it might be instructive to mess about with the source.

The install is pretty easy as far as these things go. Setting up the network (on WiFi, no less) was automatic, which is better than Linux at times (WiFi on that laptop has given me headaches, even though it’s a bog-standard Atheros).

The install only had one hiccup, which would’ve stumped a newbie. Since I had a GPT partition table with a protective MBR and GRUB installed, the auto-partitioner did something strange and the bootloader didn’t get installed properly. I was able to figure out how to use the bootloader on the install disk to boot into the system and fix it.

The installer wrote the BSD disklabel into the protective MBR partition (which started at sector 1), but the root partition therein started at sector 34 (after the GPT tables). This meant that the MBR boot block (which never got installed, for some reason; perhaps they expect a workable DOS-style one to already be there) couldn’t find the partition’s boot block.

After a few false starts, I was able to fdisk the MBR partition to start at sector 34. (I also changed its type to NetBSD, not that it mattered much.) Once I marked it as active and installed the MBR boot block, everything worked.
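(If you want to sanity-check that sort of thing before poking at the disk with fdisk, the MBR is easy enough to read by hand. Here’s a read-only sketch in Python; the device path is just a placeholder, since the name differs between Linux and NetBSD, and a GPT protective MBR shows up as a single type 0xee entry starting at sector 1.)

# Read-only look at the MBR partition table: where each entry starts,
# how big it is, and which one is marked active.
import struct
import sys

def dump_mbr(device):
    with open(device, "rb") as disk:
        mbr = disk.read(512)
    if mbr[510:512] != b"\x55\xaa":
        print("no MBR boot signature found")
        return
    for i in range(4):
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        status, ptype = entry[0], entry[4]
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype == 0:
            continue  # empty slot
        flag = "active" if status == 0x80 else "inactive"
        print(f"entry {i}: type 0x{ptype:02x}, {flag}, "
              f"starts at LBA {lba_start}, {num_sectors} sectors long")

if __name__ == "__main__":
    # /dev/sda is a placeholder; pass your own device and run with privileges
    dump_mbr(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda")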

If I’d applied my old procedure of zeroing the first K or so of a hard drive before a new install (adopted as a surefire fix for Windows installers that balked at anything the least bit unexpected), it probably would’ve gone fine. (I kinda want to test this theory.)
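(For completeness, the “zero the start of the drive” trick amounts to nothing more than this, sketched in Python. /dev/sdX is a placeholder, and obviously this wipes the partition table, so don’t point it at anything you care about.)

# Overwrite the first kilobyte of the device with zeros, wiping the
# MBR/disklabel so the next installer starts from a clean slate.
# DESTRUCTIVE: triple-check the device name before running.
import sys

def zero_start(device, length=1024):
    with open(device, "r+b") as disk:
        disk.write(b"\x00" * length)

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: zero_start.py /dev/sdX")
    zero_start(sys.argv[1])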

Anyway, once I got that fixed, I started to play around. The base system includes a fairly basic un*x setup, complete with X and build tools, but not much else. I enabled XDM during installation, and while X worked out of the box, when I logged in I was greeted with TWM. I haven’t used TWM since I tried running Gentoo for a while (which this kinda reminds me of). Also, there’s no browser (that I noticed) at all.

I installed Firefox binaries (which are about twenty versions out of date) and XFCE (moldiness unknown, I didn’t check) and things were much nicer. (Once I remembered which config file I had to hack to change my window manager. XDM apparently doesn’t understand the way modern desktops advertise themselves to display managers.) I’m thankful I decided to set up pkgin during installation.

It’s unfortunate that there’s no handy Chromium build. There’s a work-in-progress port, but nothing quick and easy. So I decided to use pkgsrc to build a more recent Firefox before attempting to compile Chromium. Unfortunately, pkgsrc isn’t smart enough to continue an interrupted download, so I don’t know if I’ll ever get the sources downloaded, never mind compiled.
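(Resuming an interrupted download isn’t rocket science, either; at the HTTP level it’s just a Range header. Here’s a rough sketch in Python of what I wish the fetch step did. The URL is a placeholder, not a real distfile, and a real tool would also verify the server answered 206 Partial Content before appending.)

# Ask the server for the bytes we don't already have, then append them
# to the partial file. If the server ignores Range it replies 200 with
# the whole file, which a proper fetch tool would detect and handle.
import os
import urllib.request

def resume_download(url, dest):
    have = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers={"Range": f"bytes={have}-"})
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
        while chunk := resp.read(64 * 1024):
            out.write(chunk)

if __name__ == "__main__":
    resume_download("https://example.org/firefox-source.tar.xz",
                    "firefox-source.tar.xz")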

If I’m going to be building a lot of packages for this box, I think I ought to figure out how to cross compile them on my desktop (which has way more MHz, cores, RAM, etc.).

🍪

The Itch

So, as I complained about on Facebook earlier today, the EU Cookie Law (properly known as the Directive on Privacy and Electronic Communications) is stupid and annoying. It requires all cookie-placing websites (which is basically all websites) to pester you about it. Chances are, once you say “ok, go away”, they track this by adding yet another cookie (or expanding the size of the cookie they would otherwise set).

“97% of websites use cookies – you may as well add a disclaimer that your website is using electricity.”
—Oliver Emberton, Silktide

Furthermore, the bad actors who abuse cookies (advertising trackers, cross-site crackers, etc.) aren’t going to comply. Most of what they do is illegal, or at least morally questionable. They’re organized criminal gangs operating out of countries that won’t extradite them. It’s silly.

So I’ve proposed a solution: allowing knowledgeable web browser users such as myself to “opt in” to cookies. Most of us already have. We’ve read the hype about “evil cookies,” seen past the drama, realized that most of the convenience expected from the modern web depends on cookies, and reacted appropriately. We block third-party cookies, we allow first-party cookies, and our ad blocker does the rest (blacklisting known bad actors).

Those who aren’t so knowledgeable use their browser defaults, which are the exact same settings (at least in a reputable browser). Why? Because that’s the reasonable setup. If I wanted to be annoyed by every site that wants to set a cookie, there’s already a browser setting for that. I’ve tried it. It’s annoying.

So, in the interest of ending the annoyance, I’ve decided to propose a mechanism for opting in to cookies. (I don’t think we really needed one (more correctly, we already had one), but the EU obviously has some stupid lawmakers.) So this is a technical hack and a political protest all in one.

The Scratch

I propose that an extended HTTP header be added to bypass all this silliness. I nominate the name “X-Cookies-Please” as sufficiently succinct. (I resisted the urge to suggest something snarkier.) The content of the header is irrelevant; the presence of the header is enough to opt in. For example:

GET / HTTP/1.1
Host: ico.org.uk
Accept: text/html;...
User-Agent: Cookie Monster 1.0
Referer: https://blog.karatorian.org/
Cookie: ...
X-Cookies-Please: Yes you fools!

See, isn’t that better? I know this seems silly, but I am fairly serious. (Perhaps I should alter my tone. Or the content of the example? Nah.) I suppose I should talk to some browser developers and standards folks to get the ball rolling on this.
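(To make it concrete, here’s what opting in would look like from the client side, sketched in Python. The header is, of course, my hypothetical proposal, not something any browser or server actually honors yet, and the URL is just a placeholder.)

# A client opting in: send the (hypothetical) X-Cookies-Please header
# with the request. A cooperating server could check for it and skip
# the consent banner entirely.
import urllib.request

req = urllib.request.Request(
    "https://example.org/",
    headers={
        "User-Agent": "Cookie Monster 1.0",
        "X-Cookies-Please": "Yes you fools!",
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.getheader("Set-Cookie"))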

Tablets

So I forgot my tablet at a friend’s house last night and was too lazy to go get it today. This wasn’t really a big deal as I’ve got a perfectly functional desktop which can do anything the tablet can do (except a few walled gardens (like Snapchat) that I could conceivably run in an emulator if I so desired). Except one thing…

I can’t take it to bed with me, or on the couch, or wherever. I’ve realized what I really use it for: browsing the web in bed. Heck, I can’t even really lean back in the cheap recliner I use as a computer chair and still browse effectively.

So now I know anyway.

Apple and Encryption

If you listen to the news, you’d be forgiven for thinking that the recent Apple court order, and the legal challenge Apple filed in response, are about terrorism. They’re not.

It’s about encryption, and the story starts long before San Bernardino. It starts with iOS 8. With the release of the eighth version of Cupertino’s mobile operating system, Apple added a significant new feature: proper key management for whole-disk encryption. The system had disk encryption before, but since Apple’s servers held the keys, it wasn’t secure. Anyone with access to those servers (government officials, disgruntled employees, hackers, foreign espionage agencies, etc.) could decrypt anyone’s phone.

This is bad security practice. In the industry it’s what’s known as a “single point of failure.” Keeping millions of crypto keys on one server is a risk. Keeping said server (no matter how well firewalled) on the network is a ridiculous risk. Apple realized this, and fixed their policies.

So now, the crypto keys for an iOS device don’t leave that device. Which means that if you forget your access code, Apple can’t help you. The data is locked behind a layer of strong encryption which can’t be feasibly broken (given the current state of declassified encryption science). And this is a good thing; this is whole-disk encryption working as designed.

So what is the case about, and what has the FBI asked Apple to do? There is one hole in the system: the crypto key is secured with the phone’s passcode, and the passcode can be attacked. It’s a simple matter of brute force (trying every possible combination until one works). A four-digit lock code only has 10,000 possibilities. While that sounds like a lot (especially if you’re entering them manually), with automation it’s a simple matter to try them all. This is what the FBI wants to do.
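(To give a sense of scale, here’s a toy brute force in Python. The key derivation is a stand-in; PBKDF2 with a random salt, not Apple’s actual scheme, which entangles the passcode with a per-device hardware key and adds escalating delays. Even with a deliberately slow derivation, all 10,000 codes fall in a minute or so on an ordinary laptop.)

# Toy brute force of a four-digit passcode. PBKDF2 stands in for the real
# key derivation; strip away the hardware binding and the retry limits,
# and 10,000 guesses go by very quickly.
import hashlib
import os

SALT = os.urandom(16)

def derive_key(passcode: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), SALT, 10_000)

target_key = derive_key("7294")  # the "forgotten" passcode we want to recover

for guess in range(10_000):
    code = f"{guess:04d}"
    if derive_key(code) == target_key:
        print(f"passcode recovered: {code} ({guess + 1} attempts)")
        break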

However, to prevent people who steal phones from doing the same thing, the system is programmed to delete the encrypted data after ten failed passcode attempts. The court order asks Apple to disable this protection: the FBI proposes that Apple author an OS update, sign it with their private key (required so the phone knows Apple really released the update, not some random hackers), and load it onto the phone. Apple (rightly) refuses.

The FBI says this is a one-time deal. Everyone else (Apple, Google, Facebook, me, hopefully you) knows this isn’t the case. If this technology exists, it will be used. The FBI will ask again, other agencies will ask, local police will ask, foreign governments will ask; eventually it will be leaked, or hackers or foreign intelligence agencies or the NSA will steal it, and then every iOS device will be insecure.

The government has periodically tried to insert backdoors into crypto systems with the argument that law enforcement needs the capabilities, that only the “good guys” will use them, and that the “law abiding citizen” has nothing to fear. This ignores the facts. It’s technologically impossible to give them what they want without compromising the system for everyone.

In short, the crypto works. The FBI cannot crack it. The NSA cannot crack it (or won’t admit in open court that they can). Apple cannot crack it. Given the NSA’s history of warrantless wiretaps and intentional weakening of crypto systems, this is as it should be.