“Salt Preserve Us”

 » February 18th, 2014

With the latest service breach / data theft making the rounds, discussion has turned again to password hashing, which inevitably leads to a discussion of hash salting. Before I go any further, though, I’m going to cop right now to ripping this post’s title off of a brilliant reddit comment by pellets.

Inevitably, though, somebody (or some company, for fuck’s sake Adobe) is going to take your trust for granted and fuck up your password right out into the open internet. As a result, it’s been standard practice for a while to advise that people not use the same password for multiple services. This limits the damage that a password leak can do to you, and it’s a great idea. But it gets us back to one of the age-old problems of computer security: strong and diverse passwords are harder to remember, and thus more likely to be written down (often in terribly conspicuous places).

For some, the solution to this is a service like 1Password, which provides a secure barracks to house your army of strong, unique passwords. That being said, these services aren’t for everyone. Perhaps you’re in a situation where you’re frequently manually entering passwords that can’t be saved. Maybe you don’t trust the keys to your kingdom to a single point of failure – after all, untrustworthy security practices by a third party are what got us into this mess to begin with.

If you find yourself in this situation, I’ve got something else to propose: pre-salt your own passwords. Start with a strong initial password, and employ a deterministic variation for each service you use the password with. Let’s start with an example. (This won’t be great, for reasons we’ll explore further down, but it’s illustrative nonetheless.)

Base password: gh23^^1kJa
Variation: prepend the first letter of the service to your password
Result: Gmail becomes ggh23^^1kJa; Hotmail becomes hgh23^^1kJa; Kickstarter becomes kgh23^^1kJa

Why isn’t this a great example? Well, a good scheme should have a few important qualities. Let’s pick apart the example above:

1. Low variation scheme guessability: In the above example, our variation scheme is potentially pretty easy to guess. If one of your passwords was compromised (say, your Kickstarter password), it’s possible that somebody could presume that you’re pre-salting, and try to guess at your scheme with some success. (Whether anybody would ever actually try something like this is another question.)

2. Low collisions: Right now, all services that begin with the same letter will collide: your Gmail password will be identical to your GitHub password, for example, since both services start with the same letter.

So what to do? Be creative with your variation scheme. Use more than one letter. Use the second and third letters of the service name. Insert them in the middle of your password. Use the next letter in the alphabet. It’s reasonably unlikely that somebody will guess that you’re pre-salting and unwind something like gh2h3^^1kJa to figure out how you varied it for Gmail.
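To make the mid-password variation concrete, here’s the gh2h3^^1kJa scheme from above sketched as a tiny shell function. (The base password is the one from the example; the function name is my own invention, and you’d obviously pick your own transformation rather than this one.)

```shell
#!/bin/sh
# One hypothetical pre-salting scheme (the gh2h3^^1kJa one): take the
# service's first letter, advance it one place in the alphabet (z wraps
# to a), and insert it after the base password's third character.
presalt() {
  base='gh23^^1kJa'                                     # base password from the example
  first=$(printf '%s' "$1" | cut -c1 | tr 'A-Z' 'a-z')  # service's first letter
  shifted=$(printf '%s' "$first" | tr 'a-z' 'b-za')     # next letter: g -> h
  printf '%s%s%s\n' "$(printf '%s' "$base" | cut -c1-3)" "$shifted" \
    "$(printf '%s' "$base" | cut -c4-)"
}

presalt Gmail        # gh2h3^^1kJa
presalt Kickstarter  # gh2l3^^1kJa
```

The specific transformation matters less than the property it buys you: deterministic for you, non-obvious to somebody holding a single leaked password.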

This is obviously no panacea to the various issues around password security, but if you find yourself in need of a set of unique, strong passwords and you want to store them all in your head, you could do worse than adding a little salt.

wherein I lose my mind analyzing spam

 » May 7th, 2013


Hey look, a dishonest whitepaper

 » April 26th, 2013

Dell commissioned Principled Technologies to put together a whitepaper on the cost of enterprise deployment for iPads vs. Dell’s Latitude 10 tablet running Windows 8. Bradley Chambers (h/t @gruber) takes apart Dell’s arguments around this pretty handsomely, but with claims like “85% cheaper to manage” (in favour of the Dell, natch), I thought the whitepaper could stand a closer look. Because here’s the thing: whitepapers sound like research, and they do their best to look like research, but they are not research. They’re not reviewed, they have no minimum standards for experimental procedure, reason or even honesty. Whitepapers, like the one we’re talking about here, are marketing literature dressed up in an ill-fitting scientist costume.

Spoiler: There is little that is Principled about Principled Technologies’ tablet whitepaper.

Throughout the report they’ve made every effort to handicap the iPad as much as possible. My guess is that the report is aimed at a predominantly iOS-unfamiliar tech audience; they’re banking on people not realizing how ludicrous their claims are.

Let’s look at some of the most dizzying conclusions (prices given per-device):

Deployment cost: Dell = $1.16; iPad = $19.47.

For the Dell they presumed that you’d be creating a basic system image, then pushing that out to your tablets. This is very reasonable. They helpfully assumed that you’d be able to simultaneously image 10 devices at a time.

With the iPad, however, they conveniently skipped over Apple’s mass configuration utility for iOS devices. Instead, they timed manually powering on each device, setting it up and then downloading apps, one by one, by hand. On each device. But don’t take my word for it: “We timed the manual steps of turning on the device, going through the initial system menus, and installing apps on the device. We chose a sample of popular enterprise and productivity apps.”

Management computers: Dell = $0.00; iPad = $5.00

I’ll quote: “The Dell tablets need no additional management computers. We assume they use the ones already in place for managing notebooks and desktops.” Seems fair.

Let’s compare: “We include the cost of five computers to help with iPad deployment. These computers would have iTunes installed to assist in backups, restores, device syncs, and other iPad-related tasks.” I see. You know, if they’d at least included the Apple Configurator utility in the previous calculation, they could reasonably argue that they’d need at least one Mac. But they didn’t. I guess they needed a special iTunes-capable computer.

Printing: Dell = $0.00; iPad = $5.00

The Dells (theoretically) work with your (theoretical) preexisting printers. Awesome you guys!

Unfortunately, even though there are over 550 AirPrint-compatible printers (including six made by Dell), we regret to inform you that we have imagined that your office printers are not AirPrint-compatible. Don’t worry, though, you can just buy more computers! You can’t use the computers we bought in the last step, though. Those are for iTunes and these ones are for printing. It’s a common mistake.

I think we can tease out a pattern here.

What surprises me isn’t that this whitepaper came down in favour of Dell tablet ownership – this is a Dell-commissioned study, of course. I’m just amazed at how little effort was put into making this convincing. Battery replacement ($79 for the Dell, done on-site, vs. $110 for the iPad, shipped to Apple) is an example of a reasonable cost comparison that breaks in Dell’s favour. I figured that the paper’s authors would cherry-pick a few of those, selectively ignore issues like dealing with malware (which, I can tell you from my days in IT, is a major headache and a major cost), and call it a day. Maybe the iPad is more expensive to operate, but we certainly can’t tell that from this study.

On one hand, a study like this and the associated press-release froth coming out of Dell is little more than “BREAKING: Dell sez Dell the best.” I wonder about IT heads, and how many of them, unfamiliar with iOS and Apple, take a report like this at face value. But I wonder more about whether some of the suits at Dell have been smoking a bit too much of their own stash here. I wonder how much traction this report gets inside of Dell. Nilay Patel did a great roundup of some of Dell’s most eye-popping flops recently, and they all seem to point to a company that is deeply out of touch with what makes a great (or even good) product. If I were in charge at Dell, I’d sure as hell be evangelizing about how much better our products are than our competitors’, but I’d be embarrassed to try to float a report as shoddy as this one.

If there are people at Dell reading this as the truth, they’re in worse shape than I’d imagined.

Gaining Weight for Fun and Profit: ARMV6 in Xcode 4.5

 » March 27th, 2013

With the launch of iOS6 and the corresponding Xcode 4.5 update, Apple quietly dropped support for producing binaries compatible with ARMV6 devices. This means that if you want to build an app that makes use of the new APIs introduced in iOS6, your app can’t also run on the original iPhone, the iPhone 3G or the first two generations of the iPod Touch. Developers can still use older versions of Xcode to produce binaries that will work on all devices, but this means losing support for APIs introduced in iOS6, including full 4″ screen support. Further, as of May 1st Apple will stop approving apps that don’t fill the iPhone 5’s screen.

I’m not going to bury this lede any further: it is entirely possible to build apps supporting all iOS hardware in Xcode 4.5+, and I’m going to explain how to do it.

I’m sure many developers wonder why this is even worth kerning vector letterforms over – who cares about a circa-2008 iPhone 3G? I don’t expect many of your sales to be coming from these older devices (which are still kicking around in the millions, mind you), but you may be in the same boat we’re in with Pano – we have a user-base numbering in the hundreds of thousands, stretching back nearly five years, and we don’t want to leave these users out in the cold. Further to that, the decisions we make about legacy support have far-reaching impacts on how quickly older devices are forced into obsolescence, and I’d argue that we have a responsibility to appreciate the real cost of these devices. In our case, there were some social features we wanted to add to our app, and these depended heavily on iOS6 APIs. We’ve worked hard to fight target-version creep, and we don’t give up the ghost easily.

Some technical background: while all iOS devices (to date) run ARM processors, these processors have used three ARM architecture variants – ARMV6, then ARMV7 and now ARMV7s. While these architectures are backward compatible – your ARMV7s iPhone 5 can run an older ARMV6 binary – your iPhone 3G can’t run a binary that only contains ARMV7s machine code. Apple has experience transitioning between different processor architectures on their desktop machines, first from 68k to PowerPC, then from PowerPC to Intel. The solution to this architecture-compatibility problem is to produce a fat binary; that is, a single binary that contains multiple compiled versions of your code, one each for the architectures your application will support. On OS X and iOS, this is accomplished using a tool called lipo that slices up binaries and performs a bunch of apt-surgical-metaphor-goes-here functions on them: stripping architectures out and stitching new architectures in.
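For the curious, lipo’s slicing and stitching looks something like this from the command line (file names here are hypothetical, and this obviously requires Apple’s toolchain):

```shell
# Stitch thin, single-architecture binaries into one fat binary
lipo -create MyApp.armv6 MyApp.armv7 -output MyApp

# Inspect which architectures a binary contains; prints something like:
#   Architectures in the fat file: MyApp are: armv6 armv7
lipo -info MyApp

# Strip an architecture back out
lipo MyApp -remove armv6 -output MyApp.armv7only
```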

For most developers, this happens unseen and under the hood during Xcode’s build process. Prior to the iPhone 5, modern versions of Xcode were using lipo to produce fat iOS binaries that natively supported ARMV6 and ARMV7. Starting in Xcode 4.5, this shifted to ARMV7 and ARMV7s, and Xcode dropped support for producing ARMV6 machine code altogether.

The solution to this problem is pretty simple – conceptually, at least; your Xcode build process needs to produce an ARMV6 image for your application, then stitch it into the application binary, producing a fat binary that includes machine code for ARMV6, ARMV7 and ARMV7s.

In practice, there are two parts to this process. The first is to produce an application that will run appropriately on all of the devices and API levels you plan to support. I’m not going to go into great detail here, as it’s an extension of what we’ve been doing for years to conditionally support APIs where they’re available. Xcode is still remarkably aloof when it comes to API awareness, so you’ll be well served by something like Ivan Vasic’s venerable Deploymate, which is a development tool expressly designed to check your API usage against your API target level and highlight potential problems. You’ll also want to #ifdef out any code that is iOS6 dependent – something like #ifndef ARMV6_ONLY and #endif should surround any code blocks that the legacy Xcode compiler would choke on (remember, it knows nothing about iOS6). In my experience, weak-linking frameworks that were introduced in iOS6 will still cause some headaches when they’re part of your project, so we’ll keep Xcode in the dark here. Remove iOS6-exclusive frameworks (Social.framework, I’m looking at you) from the Linked Frameworks and Libraries section of your target’s summary pane and instead add them to your Other Linker Flags build setting in the form “-framework <framework-name>”; in my case that looked like this:

Manually weak-linking frameworks

The second part of this process is where the dance really begins: we have to compile that extra binary image and stitch it into our app bundle. The earliest incarnations of this technique involved jumping back and forth between two versions of Xcode, but I’m happy to report that we can automate things and have it all run from within Xcode 4.5+. ARMV6 support has been dropped entirely from the build chain in these newer Xcode versions, however, so we’ll still need to install an older version of Xcode beside our current install. I’d suggest using Xcode 4.4.1 – this is the last release to support ARMV6, and it’s downloadable from Apple’s developer site.

You’ll want to duplicate your build configuration (in your Project’s Info pane) to create an ARMV6-specific one; if you’re feeling creative you might call it Release-armv6. In your target’s build settings, you’ll want to set both the Architectures and Valid Architectures for your ARMV6 build configuration to “armv6”, while the rest should be set to “armv7 armv7s”.

In order for the legacy version of Xcode to skip over your ifndef’ed code blocks, you’ll also need to add a compiler flag to Other C++ Flags and Other C Flags in the same build settings pane; assuming your preprocessor blocks are looking for the ARMV6_ONLY token, your flag will be -DARMV6_ONLY.
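If you prefer xcconfig files to clicking through build-settings panes, the equivalent settings would look roughly like this (the setting names are Xcode’s standard build-setting identifiers; the values are the ones described above, and you’d split the two groups into separate per-configuration files):

```
// Release-armv6.xcconfig – the ARMV6-only configuration
ARCHS = armv6
VALID_ARCHS = armv6
OTHER_CFLAGS = -DARMV6_ONLY
OTHER_CPLUSPLUSFLAGS = -DARMV6_ONLY

// All other configurations
ARCHS = armv7 armv7s
VALID_ARCHS = armv7 armv7s
```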

Your cupcakes should be beginning to rise, and they should look something like this:

Build settings

Next we’re going to add a run script build phase to our target that uses the C shell (shell: /bin/csh). Our run script will do the following: (1) compile an ARMV6 version of our binary using our ARMV6 build configuration; (2) modify the app’s plist to lower the minimum iOS version (Xcode 4.5+ won’t write out a version below 4.3); and (3) verify that our final binary includes our desired architectures.

I’ve adapted my run script from one posted here; mine is posted as a gist here, and it includes additional code to do basic sanity checking on your build – following the lipo process, we check to verify that all three desired architectures are present in the binary and we fail the build if they aren’t. You’ll want to ensure that you’ve configured your build appropriately for your own setup; the build script produces a useful log file that may be helpful if you need to debug a failed build.
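For reference, here’s a heavily abridged sketch of what those three steps look like (in Bourne shell here for readability, though the actual run-script phase uses csh; the install path, scheme names and thin-binary paths are hypothetical, and the real gist is far more careful about error handling and intermediate locations):

```shell
# (1) Build the armv6 slice using the legacy Xcode 4.4.1 toolchain
DEVELOPER_DIR=/Applications/Xcode441.app/Contents/Developer \
  xcodebuild -configuration Release-armv6 -sdk iphoneos build

# (2) Lower the minimum OS version in the built app's Info.plist
# (Xcode 4.5+ refuses to write out anything below 4.3)
/usr/libexec/PlistBuddy -c 'Set :MinimumOSVersion 3.1' \
  "$TARGET_BUILD_DIR/$INFOPLIST_PATH"

# (3) Stitch the armv6 slice into the fat binary, then sanity-check that
# every architecture we care about actually made it in
FAT="$TARGET_BUILD_DIR/$EXECUTABLE_PATH"
lipo -create MyApp-armv6 "$FAT" -output "$FAT.tmp" && mv "$FAT.tmp" "$FAT"
for arch in armv6 armv7 armv7s; do
  lipo "$FAT" -verify_arch "$arch" || exit 1   # fail the build if a slice is missing
done
```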

I want to note here that Xcode uses its own version of lipo which is distinct from the binary that ships with OS X – you will want the fully qualified path to Xcode’s own version if you’d like to verify your binary’s architectures yourself; that path is:

Further, be aware that your build script will wipe Xcode’s $ARCHIVE_PATH environment variable, meaning that archive builds will fail to automatically archive. The .xcarchive bundle will, however, be generated, meaning that you can manually archive these files yourself. Xcode’s archive process is quite opaque, but I’ve determined that the archive list in the Xcode Organizer is populated by scanning for a special Info.plist file inside of your .xcarchive bundle. This plist file won’t be generated during our build process, so I’ve written a script that you may want to add as a post-action to your scheme’s Archive settings. My script (reproduced here) grabs a template Info.plist (I’d suggest snagging one from a previous application archive), then customizes it for your archived build and places it into your .xcarchive bundle.

… and there you have it – triple-architecture fat binaries, hot and fresh out of Xcode 4.5.

– I’d like to specifically acknowledge a whole bunch of Stack Overflow contributors, including Mike and Jerome, for their work, which I built on for the above post. Also, thanks to Justin Williams for providing a minimally-helpful-but-technically-correct suggestion when I was trying to figure out how the Xcode Organizer searches for .xcarchive bundles.

Trust No One: Google, Apple and Failure

 » March 22nd, 2013

The Google Graveyard is making the rounds again, as it often does when a venerated Google product gets the axe.

I’m none-too-happy about Google shutting down Reader, and I may have shown up at the cemetery expecting to be enraged about all of the great products that Google has killed over the years. The more I looked around, though, the more I realized that these graves may be showing us what a company needs to do to compete in web services as Google does – all the product killing is just an irritating byproduct of something much more important: failure. Google produces some terrific products; from the looks of it, part of that process includes taking a risk on things and a real willingness to fail.

This doesn’t mean that axing Reader was a smart move – it feels cynical and it certainly doesn’t square with the relaxed-and-helpful-friend persona Google tries to project. It is, however, a fascinating angle when you look at Apple’s recent inability to really pull off something great service-wise.

Google, it seems, is launching a new product every day. Some of them take off, others don’t, and – this is the most important part – Google doesn’t set themselves up to be mortally wounded when some of their products fizzle. Some of their more high-profile launches have certainly left them open to an internet’s worth of mockery, but excepting perhaps Google+’s inability to gain real traction, few observers fret about what these failures mean for Google’s long term prospects.

Now let’s compare this with the way Apple launches their services: Siri, Maps, Ping, iCloud – these were all major announcements and Apple bet heavily on each of them (maybe less so on Ping). The failure of Maps has cost Apple dearly. They launch fewer services and expose themselves to significantly more damage if these services fail. There are a number of reasons for this, but not the least of which is that Apple is very selective about what they devote resources to; they simply do fewer discrete things.

Despite all of the ways in which they compete, Google and Apple are very different companies and they make their money in very different ways. Search is still almost certainly Google’s golden goose. They seem careful to risk failure in products that may complement search but wouldn’t weaken it if they fail. Apple’s history is in hardware, and it’s also where most of their money comes from – it is reasonable that a hardware company would be averse to a scattershot approach.

Apple’s problem arises if their future depends significantly on their service offerings as well. If that’s the case, they may need to come up with a way of failing more gracefully. This feels antithetical to what Apple is supposed to be all about, but it may also be their best shot. Apple has been compared to a jilted lover, unwilling to be dependent on other companies; self-reliant and unable to truly trust anyone else. A services-oriented Apple may need to hedge further and add themselves to the list of entities they can’t completely trust.