The Trouble With Passwords (Again)

Part of my efforts to grab my life by the corners and twist it into a different shape was a decision to switch my “primary” computer to be a laptop, rather than the ailing iMac. I’ve almost finished making that move, and have just a few things left to bring across from the old machine onto this laptop. So I sat down last night to recover some passwords and account information I’d been missing, which I knew was in the Keychain on the old machine. And there the hassle began again.

It’s been pointed out, and I’ve ranted about it in the past in different forums, that the Mac OS X Keychain is a parson’s egg. It does a really good job of noting authorisation credentials for software running as the current logged-in user, pretty well invisibly, silently and hassle-free. Most software that needs authentication credentials has been written correctly to use the Keychain, and as long as nobody swipes both the keychain file and the master password, it’s reasonably secure.

Where the Keychain Access program falls down badly, though, is usability for a specific but pretty common use case: bulk-exporting credentials for import into a different keychain.

It’s not that Apple are unaware of this as a failing in the product: their support forums are littered with people asking how to do a bulk export, and the response is always the same – use the Migration Assistant to move the whole account from one machine to another. And there’s the fallacy in their design world view: Apple design software with the belief that there is a one-to-one relationship between a user and a user account on a single machine. For all their talk about cloud services, they still have this vision of a single user with a single user account instance publishing to the cloud. Bzzt. Wrong. It’s only loosely true for most users, and very wrong for the minority who, for one reason or another, have different accounts, potentially on different computers, for different uses and contexts.

The canonical and simple example is where I was a few months ago – a main desktop which was a document repository, workbench and media player, and a laptop which contained a subset of documents that were currently being worked on. And a computer at my workplace with some internet connectivity, and a strict injunction against plugging private devices into the network. Oh, and the FrankenPuter Windows 7 box I built for games. Getting this to work, in general, was fairly straightforward – I used ChronoSynch to keep specific folders in synch, and Spanning Sync to keep calendars and addresses in synch between the two computers and Google. Using IMAP for Gmail kept mail sort of in synch, and Chrome’s facilities for synching bookmarks between instances via Google work OK.

But two things did not work at all well. There was no good way to keep two instances of Things in synch (but they are [working on that]), and absolutely no way to keep credentials and secure notes in synch (caveat: no way without committing to drinking the 1Pass kool-aid, which I may yet do).

I sat down on Monday night to finally get all the passwords out of the iMac keychain and onto the laptop somehow. Exercising Google-Fu, I found a pretty good AppleScript solution which did the trick, even if it had to deal with the annoyances of the Keychain. The trick was to unlock each keychain before running the script, then, as the script ran, click “Allow” on the two modal dialogs that Apple threw up for each item in each keychain. Somewhere over 300 clicks later, I had a text file with pretty well all I needed in it, and a firm decision to leave the data in a text file for reference, and not muck about trying to get it into the laptop keychain (see, I’m already thinking that 1Pass might be the better solution).
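If AppleScript isn’t your thing, the bundled security command-line tool can produce much the same dump – this is a sketch rather than what I actually ran, and you still get to click “Allow” for every single item:

    # Dump the login keychain, including the stored secrets themselves (-d), into a text file.
    # Each item throws up the usual access dialog, so budget for a lot of clicking.
    security dump-keychain -d ~/Library/Keychains/login.keychain > ~/Desktop/keychain-dump.txt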

The next part of the puzzle was to get it onto the laptop. Now I’m slightly paranoid about things like this, and wanted to have at least a third copy while I got it across. Ok, it was late at night, and I wasn’t thinking straight. I’ve misplaced my last USB thumb drive (damn, need another), so I decided to toss the file onto [DropBox] to aid in the transfer. Which led to the next issue: there was no way I would throw this file into the cloud without it being encrypted, and strongly encrypted at that.

Ok, easy solution there – encrypt it with PGP. Done. Now to install PGP on the laptop… wait a minute, when did Symantec buy up PGP? And they want how much for a personal copy? (As an aside, for an example of entirely obfuscating costs and product options, the Symantec PGP subsite is a masterpiece). When it comes to companies I am loath to entrust with protection of my secrets, Symantec is pretty high on the list. Ok, second plan: grab MacGPG. I’ve used earlier versions, and have used GPG and its variants on other platforms, and am confident in it. On the other hand, I really miss the point-and-click integration of MacPGP. Fortunately there’s a project under way to provide a point-and-click interface on top of the underlying command line tools, and I’m pretty happy with what they are doing. If you need it, go check out GPGTools, but be aware that you’ll probably need some of the beta versions of stuff – the stable release at the time of writing doesn’t provide an interface for decrypting files. The only thing I’m unhappy about is that it automagically decrypts files for me, without prompting for the passphrase. So while it’s good for protecting the file in the cloud, it’s not so great for protecting the local copy (yes, I know that there’s little protection if someone swipes the laptop).
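For reference, the command-line tools that GPGTools sits on top of will do the whole job by themselves. A minimal sketch, assuming a symmetric passphrase rather than a public/private key pair, with placeholder file names:

    # Encrypt with a passphrase and AES-256 before the file goes anywhere near the cloud.
    gpg --symmetric --cipher-algo AES256 keychain-dump.txt    # writes keychain-dump.txt.gpg
    # On the laptop, decrypt it again (you will be prompted for the same passphrase).
    gpg --output keychain-dump.txt --decrypt keychain-dump.txt.gpg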

Which leaves me with the old hack – create an encrypted DMG with the file(s) in it. It’s a pretty straightforward process (the command-line equivalent is sketched after the list):

  1. Run Disk Utility.
  2. Select “New Image” and specify one of the encryption options. Other than the size and name, the rest of the options can be left at their defaults.
  3. Copy the files into the new DMG.
  4. There is no step 4.
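For the terminal-inclined, hdiutil will do the same job without the GUI. A rough sketch, with the size, filesystem and names purely as assumptions:

    # Create a 50 MB journaled HFS+ image encrypted with AES-256; hdiutil will prompt for a passphrase.
    hdiutil create -size 50m -fs HFS+J -encryption AES-256 -volname Secrets ~/Desktop/Secrets.dmg
    # Mount it, copy the files in, and eject it again.
    hdiutil attach ~/Desktop/Secrets.dmg
    cp ~/Desktop/keychain-dump.txt /Volumes/Secrets/
    hdiutil detach /Volumes/Secrets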

The only alarming gotcha is that it appears you can decrypt the image without providing a credential, if you have allowed Disk Utility to store the passphrase in your keychain. The trick is twofold – first, credentials are kept in a cache for a few minutes after use so that you usually don’t have to provide them in rapid succession. You can flush the cache by locking the keychain again. The second part is that by default the keychain remains unlocked after login. You can tweak these settings by going into the preferences for Keychain Access – I like to select “Show Status in Menu Bar”, and deselect “Keep login keychain unlocked”.
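The same settings can be poked from the command line with the security tool – a hedged sketch, assuming the default login keychain and a five-minute timeout:

    # Lock the login keychain right now, which also flushes the cached credential.
    security lock-keychain ~/Library/Keychains/login.keychain
    # Have it lock itself after 300 seconds of inactivity (-u -t 300) and when the machine sleeps (-l).
    security set-keychain-settings -l -u -t 300 ~/Library/Keychains/login.keychain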

All of which takes me off on a ramble from what I was thinking about. It seems to me like the battle to allow and encourage strong personal encryption and digital signing has been abandoned, and the focus has shifted purely to secure use of online services. There are a few personal file protection products on the market, of unknown and unverified strength, and a few more business-focussed products. The widely available public key infrastructure intended for general public use never eventuated, subsumed instead by an industry focussed on providing certificates for web sites and for B2B secure communications.

Apple provides FileVault as a means to encrypt the entire disk, and there are similar products available for various versions of Windows, but the trouble is that for encrypting a subset of files the options remain either dodgy or highly technical. And don’t get me started on digital signatures on mail.

All in a twitter.

There has been some talk already regarding the use of Twitter as Brisbane sank beneath the waves. Unfortunately all the talk I’ve seen so far has limited itself to merely cheering that the service was marvellous (for example, some of the talk over at The Drum), without examining what worked and what did not.

As I tap away at this on the train, I note that John Birmingham has touched on the subject, and his comments are certainly accurate and pertinent. I definitely echo his thoughts on the essential uselessness of traditional broadcast media through all of this. The offerings from the free-to-air television services were worse than useless, and the commercial radio stations carried on as if nothing was happening. I say “worse than useless” because all that I saw from the FTA television services was distorted, often inaccurate and out of date, and carried an air of desperately attempting to manufacture panic and crisis.

There was a particular gulf between the representations of which areas had been affected. If you watched any of the three commercial stations, you would gather that the only flood-affected areas were Toowoomba, the Lockyer Valley, Rosalie, Milton and West End. If you watched the ABC you knew that Rocklea and Fairfield were trashed. If you monitored Twitter and other social media, you saw people on the ground with phones desperately broadcasting that areas like Fig Tree Pocket and Goodna were essentially destroyed, and can we please stop talking about the Three Monkeys Cafe in West End?

Of course, I no longer have any expectation that traditional broadcast media can be either informative or effective. And I include our appallingly bad newspaper of record here – the joke in Brisbane goes “Is that true, or did you read it in the Courier Mail?” Direct dealings with representatives of the broadcast and print media here over the last ten years or so have consistently emphasised that they will not travel more than a few kilometres from the centre of town, and absolutely will not seek anything other than a single image or 15-second film grab that can be used as a sting. [refer channel 9 drinking game here].

What interested me most over the past week has been how various “official” Twitter voices have used the service. There were some marked and intriguing differences. Individual users definitely grok Twitter – a constellation of different #hashtags coalesced to one or two within about 24 hours, and the crowd mainly acted to filter out spam and emphasise important and useful information. There was a constant background hum of spam and attempted scams in the feed, but I noticed that whenever an important message was submitted by one of several voices of authority (and a tip of the hat to John Birmingham here, he carries a lot of weight online), the crowd spontaneously amplified the message and ensured it was heard: the flow was usually from Twitter to other social services like Facebook and LiveJournal, and even back onto the comments pages of the traditional media outlets’ web sites.

Three particular accounts interested me: the 612 ABC channel, the Queensland Police channel, and my bête noire, the TransLink SEQ channel. A parenthetical aside here as well: I use the word ‘channel’ in the sense of water (and information) flow, not in the sense of a TV or radio channel.

Someone at 612 has understood Twitter right from the beginning, although it’s pretty obvious when their official operator is and isn’t working, as the rate of messaging fluctuates wildly over the day. The bulk of their messages are snippets of information, or direct questions requesting feedback or information. Occasionally they will point off to their own website for further interaction, usually to pages used to gather information rather than distribute it, and occasionally they point off to other resources.

The QPS channel historically was of mixed quality, and their direction zig-zagged over the week before settling into a solid pattern: messages were well #hashtagged, important information was emphasised and repeated, and messages pointing to deeper background information held on other sites carried enough of a summary to let the reader tell whether they needed to go to the external site.

TransLink, by contrast, was an example of how not to use the service. There was every indication that they were explicitly refusing to respond to direct messages or any sort of feedback, and virtually all their messages contained no content and simply directed readers to their web site. Of course on Tuesday, as the CBD was to all intents and purposes evacuated, the web site melted down, and it was unusable for much of the week. I will refrain from pointing out the flaws of their site here and now, but may come back to it. The height of their lunacy on Tuesday was when many, many people were asking if the rumour that public transport was halting at 2pm was true, and the *only* response was to keep repeating that they had a page with service statuses on it.

Energex and the Main Roads department had similar problems with their websites failing under load, and in retrospect this is an argument for the QPS media unit using Facebook to distribute further information: the chance of failure of Facebook as a web destination is far lower.

The Twitter stream from TransLinkSEQ is particularly interesting for its relative lack of information:

Through the morning, we had the following:

  • All CityCat & CityFerry suspended. Check the web for connecting buses. Leave extra time. More info http://www.translink.com.au
  • Due to heavy rain delays to some bus services, diversions on some routes. Check service status http://www.translink.com.au
  • Caboolture Line inbound and outbound services delayed up to 15mins due to signal fault. http://alturl.com/2thz8
  • Caboolture bus services majorly affected by flooding. http://alturl.com/b2brf
  • North Coast Line delays up to 40mins due to track/signal faults. Effects Caboolture line, delays up to 15mins. http://alturl.com/y99ap
  • Rosewood-Ipswich train services suspended due to water on tracks at Rosewood. Buses arranged. http://alturl.com/c6yvq
  • All CityCat & CityFerry services expected to be out of action all day due to strong river currents –> http://twurl.nl/7bwxnl
  • Caboolture bus services cancelled. Visit http://translink.com.au for more.
  • All Kangaroo buses cancelled. Visit http://translink.com.au for more.

After about 12pm there were widespread rumours – and a lot of direct questions were sent to TransLink about this – that public transport in the CBD was to be suspended at 2pm. This was what they broadcast in that period:

  • For more information on flood and weather affected services – http://twurl.nl/jct4cl
  • For information on the current status of flood affected services please refer to our website – http://twurl.nl/6z52j0
  • TransLink advises there are delays and disruptions on parts of the network. Services continue to run where possible.
  • Public Transport continues to run where possible – for latest disruption information see http://www.translink.com.au

At no point did they respond to the simple question “are services halting at 2pm?”. The only rebuttal of that rumour came from the QPS Media service. After about 3pm they changed their message, and seemed to realise that people were understandably cranky:

  • Services are running throughout this afternoon. Expect delays & some cancellations. Check the website for service status info.
  • Our call centre is receiving a high number of calls, causing delays in answering. Check website for info to help us manage the call volume.
  • Trains are not operating to schedule this evening due to flooding. Services are still operating on all lines -> http://twurl.nl/z2i223
  • All train services at reduced frequency until further notice, some services have been suspended. Find out more –>http://twurl.nl/7c7esj
  • All train services suspended until 6am Wed 12 Jan. An hourly train timetable will then be in place, until further notice.

It’s no surprise that their website melted down after midday – note that virtually all their messages contained no useful information and just redirected readers to the website.

Successful use of Twitter as a meaningful and important information and communication tool depended on recognising a handful of key features of the service that distinguish it from many other services:

  • it is more like a broadcast service than an asynchronous service like a web page;
  • messages should be considered ephemeral and only made meaningful by currency;
  • the tiny messages mean that it is accessible through an extremely broad range of mobile devices;
  • a very significant number of users use Twitter via mobile devices;
  • the infrastructure has evolved and been designed to support a staggeringly large number of simultaneous requests;
  • relevant information-rich messages are spread further and live longer than information-poor messages;
  • the service is inherently a two-way information flow, and questions and criticisms that flow back are indicators of errors or inadequacies in the outgoing flow.

I am hoping that organisations involved in this disaster take stock of how they used these services, and how these services can and should be used. The big lesson to be learned here is that significantly more people have mobile phones with internet access than have battery-powered radios.

Moving…

I’m back to updating this site, and will soon publish a link to it on the main site page. For the time being, I’m transferring pieces from Deviant Art, backdating entries to roughly correspond to the publication date. It will be interesting to see how long this takes, as I’m testing the use of ecto again, and it will take me a little time to adjust. Not to mention that I will halt every so often from boredom…