Geronimo!

Ok. More adventures with open source. One of the things I’ve got on my (lengthy) list at the moment is to have a look at some light(er) weight servlet and J2EE containers. JBoss is giving me hives. You may be aware of that; I’ve mentioned it before.

So the first one I wanted to look at was Geronimo, partly because it’s from Apache, partly because it’s got the option of being wrapped around either Jetty or Tomcat. I trotted off, grabbed the 2.2.1 tar ball and threw it onto my Mac so that I could run it up on the train. That’s where the irritation started. From the documentation, it was evident that in other *nix environments, I should just be able to unpack the tar ball and run bin/geronimo.sh run. I tried that on my Mac, and was hit by the dreaded Unable To Decrypt error.

It was pretty obvious that there were two parts to the solution: get a full JDK installed on the Mac, and ensure that the run time environment for Geronimo has JAVA_HOME pointing to the right place.

I could rant endlessly about Apple’s arcane treatment of Java, but won’t. If you have a developer account, it’s reasonably easy to grab a fairly recent JDK and get it installed. What’s not so obvious is where the hell the JDK winds up on your machine after install. Apple aren’t particularly helpful with this: it lands in /Library/Java/JavaVirtualMachines. Hence I needed to add JAVA_HOME to my profile:


export JAVA_HOME=/Library/Java/JavaVirtualMachines/1.6.0_31-b04-415.jdk/Contents/Home

That sorted Geronimo out nicely – the boot log showed the right JARs were being found:


Runtime Information:
Install Directory = /Users/robert/Desktop/geronimo-jetty7-javaee5-2.2.1
Sun JVM 1.6.0_31
JVM in use = Sun JVM 1.6.0_31
Java Information:
System property [java.runtime.name] = Java(TM) SE Runtime Environment
System property [java.runtime.version] = 1.6.0_31-b04-415-11M3646
System property [os.name] = Mac OS X
System property [os.version] = 10.7.3
System property [sun.os.patch.level] = unknown
System property [os.arch] = x86_64
System property [java.class.version] = 50.0
System property [locale] = en_US
System property [unicode.encoding] = UnicodeLittle
System property [file.encoding] = MacRoman
System property [java.vm.name] = Java HotSpot(TM) 64-Bit Server VM
System property [java.vm.vendor] = Apple Inc.
System property [java.vm.version] = 20.6-b01-415
System property [java.vm.info] = mixed mode
System property [java.home] = /Library/Java/JavaVirtualMachines/1.6.0_31-b04-415.jdk/Contents/Home
System property [java.classpath] = null
System property [java.library.path] = {stuff in java.home}

Nice! Trouble is – the problem persisted. Here’s the trick: the initial install of Geronimo creates various properties files. If that initial startup fails, those properties files end up with broken information in them, and you will never get it to start up. Let me repeat that:

You MUST get JAVA_HOME in the Geronimo environment pointing to a valid JDK before you try to run Geronimo, or daemons will fly out of your nose and eat your face. You have been warned.
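
For what it’s worth, here’s a minimal sketch of what a clean first run looks like on the Mac – the java_home helper and the exact JDK path are whatever your install gives you, so treat these as placeholders:

# point the Geronimo environment at a full JDK before the very first run
export JAVA_HOME=$(/usr/libexec/java_home)
# or spell the path out explicitly, e.g.
# export JAVA_HOME=/Library/Java/JavaVirtualMachines/1.6.0_31-b04-415.jdk/Contents/Home

# sanity-check that this really is the JDK you expect
$JAVA_HOME/bin/java -version

# only then start Geronimo for the first time
cd geronimo-jetty7-javaee5-2.2.1
bin/geronimo.sh run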

Sharpening the tools

Before I packed it all up, I had a workshop habit that I suspect other makers of sawdust shared. Before embarking on any work, I would spend some time cleaning up the workshop area. I would make sure the bench was clean and clear, sweep the floor, check that tools were sharp and sharpen them if necessary. I’d check the tables on the big tools for rust, and the tracking on the bandsaw. Sometimes I would get out particular tools and lay them out on the bench, ready to go.

And all the while I was doing this, I would be thinking about the work I was going to engage in, thinking about the processes I was going to follow, the pattern of work. I found this enormously relaxing, and a fantastic way to focus. I found that I would be trimming away, sweeping away, everything in my head that I didn’t need for the job at hand.

Over the past few years I’ve been trying to take the same approach to programming work, with a similar resultant focus (and one day I hope that I find it relaxing too). Thus, today, I’m re-reading Better Builds with Maven. Sharpening the tools and cleaning the bench.

One nice thing about re-reading books like this, particularly well written ones, is that there is always something to learn, some nuance that pops out that was previously invisible, highlighted by fresh experience. Even the import of a simple statement like “convention over configuration” can change over time.

Just like buses

You wait forever, and then two turn up at the same time.

Which is what happened with job offers on Friday. In a couple of days’ time, when contracts have been signed, I’ll tell you which two companies, and why I chose one over the other, but suffice it to say that I found myself in the remarkable position of having two really good offers come in within a couple of hours of each other.

I’ve dealt with a lot of agencies and agents while I’ve been hunting for work in London. Most of them have been ok, a few of them have felt incredibly dodgy, and three of them proved to be very good. Maybe I’m biased since these were the ones that got me the best chances, but it did feel that these three companies seriously thought about my reported history and interests, and made intelligent and dedicated attempts to match that against client needs. So, some free advertising for them. If you’re looking for work here in London, I strongly suggest you talk to these guys and gals:

  • ABRS went out of their way to put interesting things in front of me;
  • Salt have a good focus on marketing candidates and helping candidates market themselves;
  • Bearing have an excellent understanding of the market sectors they aim to service, and a good understanding of technology.

I’d also like to throw some laurels in the direction of BITE Consulting. They’re a small shop, with a fairly specific aim – placing people into contract roles, generally folk sourced from the colonies – but the products they offer to contractors are extremely competitive and sensible. If I’d been pursuing contract work (which I would have turned to if these permanent positions hadn’t popped up), there is zero doubt in my mind that I would have worked through and with BITE.

We now return you to regular programming.

Agility

A too-frequent question over the past couple of weeks has been “what characterises Agile development for you?”, or some variant on that. Thinking about it this evening, I am struck by how much stuff has accumulated around what is a very simple, elegant manifesto. Go look at the Wikipedia article to see how much has been built around the idea.

In particular, the attitude I keep confronting is, loosely, “Agile = SCRUM”. A few people extend that to “Agile = SCRUM + CI”. More commonly I’m seeing the equation “Agile = SCRUM + Testing”, which is interesting, because what people are really articulating is “Agile = XP + some vague notions of short release cycles”. I really think that in a lot of places, Agile has become as much of a vast over-engineered framework as any older methodology. On the other hand, I am a curmudgeon.

It’s worth restating the Agile Manifesto itself though, ripped straight from the site:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Words to live by, or at least to work by.

Mirror World…

Someone, probably William Gibson speaking through Cayce Pollard, wrote about the Mirror World of travel: the notion that it is not the big differences that lead to a sense of unease, but the myriad small and barely noticeable differences. Like the shape of electrical outlets.

An odd one for us, that we’re still getting used to. In the UK they drive on the left, but expect people on the escalators to stand on the right. On the other hand there’s a general convention to go up and down the left side of stairs. And it’s ok to take dogs on the escalators, as long as you carry them.

But the one that is driving me absolutely mad, as I’m in and out of job interviews that involve coding tests performed at a computer, is that the standard over here is for the “@” not to be Shift-2, but instead to sit where I’m used to finding the double quote, i.e. one rightward twitch of my little finger. For no readily apparent reason, the two are reversed. Which makes touch-typing code very annoying.

Scope Creep

One thing that I’ve noticed over a quarter of a century of banging out code is how the expectations for what a coder will know have expanded enormously.

When I began, the expectation was that you had an understanding of how computers roughly worked – the old CPU plus memory plus storage model that we’ve had since the beginning – and facility in one or two languages. Cobol, Pascal, Basic, Fortran, some assembler. It was anticipated that you’d be able to sit down with a manual and a compiler, and teach yourself a new language in a few days. The important part was knowing how to think, and how to really look at the problem. And of course, how to squeeze every last cycle out of the CPU, and do amazing things in a small memory footprint.

Around the turn of the century, there was not much change. Your average coder was expected to be comfortable working in a three-tier architecture, to have some vague idea about how networks and the internet worked, to be comfortable with SQL and a database or two, and to have some notion of how to work collaboratively in a multi-discipline team. And of course, to have a deep understanding of a single language, and of whatever flavour-of-the-month framework or standard libraries existed. UML and RUP were in vogue, but Agile was newfangled, and here was a wall of design documentation to ignore.

Now is the age of the ultra-specialist. You need to be server side, or middleware, or client side. You need to know a language intimately, and be vastly knowledgeable about half a dozen ancillary technologies – in the Java world, for instance, you need to grok Spring, and JMS, and JMX, and Hibernate, and Maven, and a CI tool, and a specific IDE. You need to understand crypto, and security, and enterprise integration and architectural patterns, and networking.

I fear that this rant has gone vague and off the rails. There is a strange paradox in place now: we are expected to specialise deeply in the problem spaces we address, but carry in our heads a hugely expanded toolset.

Making a Mockery

As I’m back on the hunt for a job, I’m going back and brushing up on technologies I’ve not necessarily used for a while, out of interest as much as anything else. This has been enlivened somewhat by realising I didn’t have any IDEs or other coding tools – other than Xcode and TextWrangler – on my laptop. The effort of getting things set back up so that I can play has reminded me of why I love, and why I loathe, the open source Java community.

I’ll write up some notes on bits and pieces that I want to remember as static pages elsewhere, so for the moment let me just mutter about what I got running, and what has driven me nuts.

To start with, it pleases me no end that Mac OS X comes natively supplied with Maven. That gives me some base assurance that I can, at a minimum, launch a terminal and build and test without having to download the world first (other than Maven’s habit of downloading the world, of course).
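
As a quick sanity check, something like this is all it takes to prove the point – assuming you’re sitting in a directory with a pom.xml:

# show which Maven and JDK the system has picked up
mvn --version

# build and run the unit tests for the project in the current directory
mvn clean test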

Next up, I grabbed NetBeans 7.1. I’d not used it with Maven previously, and was pleased to discover that the IDE plays nicely the Maven way, rather than desperately wanting to make Maven work the IDE way.

Penultimately, I got an Eclipse running – the SpringSource Indigo bundle – and began to grit my teeth. Don’t get me wrong. I like Eclipse a lot. But for the last eight months I’d been using IntelliJ, which means my brain and fingers had been retrained to different shortcuts, and trying to switch back to Eclipse is like trying to remember a language you’ve not spoken since high school. The other thing which always makes me grit my teeth is the richness of Eclipse. It’s the EMACS of IDEs, infinitely variable and configurable, and trying to get a fresh download to look and feel the way you are used to is painful, annoying and tedious.

And finally, I sat down to fiddle with JMock again. And spent some time tearing my hair out. I was reading through a tutorial from the JMock site, and was damned if I could get it compiling. It was pretty obvious why – the JMock objects I was expecting weren’t in the JMock JARs. Maybe it was too late at night for me to be thinking straight when I downed tools, as picking it up again this morning revealed my problem: the tutorial was referring to a slightly older version of JMock, even though it was in a JUnit 4 context, and the current version is radically different.
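
If you hit the same wall, one cheap way to find out which JMock actually landed on your classpath (and hence why the tutorial’s classes aren’t there) is to ask Maven – a sketch, assuming a Maven project and the org.jmock group id:

# list the resolved dependency tree, filtered down to JMock artifacts
mvn dependency:tree -Dincludes=org.jmock

# or just see which versions are already sitting in the local repository
ls ~/.m2/repository/org/jmock/jmock/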

Therein lies one of the things that drives me absolutely nuts about the open source Java community: too many major, key, central projects have inaccurate, out-of-date documentation, and too much key knowledge is passed around as folklore.

Only if my hair is on fire.

I think I need to educate, or re-educate, my cow-orkers to understand what it means when I put on headphones while working. And there are one or two that I really need to tell that I cannot hear them if they come up behind me and speak softly to attract my attention. On the other hand, most of the reason is that I have run out of attention to spare.

There are certain classes of IT problems that end up occupying my entire consciousness and are extremely difficult to let go of when I walk out the door, particularly if they take several days to resolve. Maybe physicists and philosophers have better mental workbenches, and can put the work down and re-emerge from their deep cognitive dives without the bends. I can’t.

If the nature of the problem is both time-bound and space-bound, I need to disappear inside my own head. What I mean is this: the symptoms of the problem and the behaviour of possible contributors are spread across human-scale rather than machine-scale time, more than one thread of operation is in play, and computation is smeared across the possibility space.

I really have no perfect tool for dissecting these sorts of problems. My workbench is scattered with a variety of tools for working on different parts of the problem. If you looked over my shoulder you would usually see that I have a text file open called “notes” or “defect xyz”, which is a mix of apparently context-free reminders to myself and a scantily sketched monologue as I propose and reject different theories. You would usually see a paper notepad with faint pencil scribbles, and a variety of abstract diagrams, mostly scratched out. I would probably have an IDE open with code highlighted, and a terminal window showing logs. What you cannot see is what’s in my head: elaborate mental models of what I believe to be the space-like computational state smeared across the problem time. The visible symbols are just reminders, annotations, histories of abandoned models.

There are two implications of this. First, I can’t put it down when I go home, or to eat, or to sleep. A sufficiently complex set of models will take up all my thoughts; there’s just no room in my head for any other sensible responses or rational thoughts. I become a dreamwalking zombie. Second, and possibly most pertinently: if you ask me to take my headphones off and pay attention to you, there’s a very high probability that the mental model currently being constructed will collapse, and I have to start from the beginning again. Your five-minute interruption will probably blow an hour or more’s work.

So please. If I’ve got my headphones on, please, please don’t ask me to emerge from my fugue state even if the room is on fire. Only if it has spread far enough that my hair is burning.

The Trouble With Passwords (Again)

Part of my efforts to grab my life by the corners and twist it into a different shape was a decision to switch my “primary” computer to be a laptop, rather than the ailing iMac. I’ve almost finished making that move, and have just a few things to move across from the old machine onto this laptop. So I sat down last night to recover some passwords and account information that I had been missing and knew was in the Keychain on the old machine. And there the hassle began again.

It’s been pointed out, and I’ve ranted about it in the past in different forums, that the Mac OS X Keychain is a parson’s egg. It does a really good job of noting authorisation credentials for software running as the current logged-in user, pretty well invisibly, silently and hassle-free. Most software that needs authentication credentials has been written correctly to use the Keychain, and as long as nobody swipes both the keychain file and the master password, it’s reasonably secure.

Where the Keychain Access program falls down badly though is usability for a specific but pretty common use-case: being able to bulk-export credentials for import to a different keychain.

It’s not that Apple are unaware of this as a failing in the product; their support forums are littered with people asking how to do a bulk export, and the response is always the same – use the Migration Assistant to move the whole account from one machine to another. And there’s the fallacy in their design world view: Apple design software with the belief that there is a one-to-one relationship between a user and a user account on a single machine. For all their talk about cloud services, they still have this vision of a single user with a single user account instance publishing to the cloud. Bzzt. Wrong. It’s only loosely true for most users, and very wrong for the minority that for one reason or another have different accounts, potentially on different computers, for different uses and contexts.

The canonical and simple example is where I was a few months ago – a main desktop which was a document repository and work bench and media player, and a laptop which contained a subset of documents that were currently being worked on. And a computer at my work place with some internet connectivity, and a strict injunction against plugging private devices into the network. Oh, and the FrankenPuter Windows 7 box I built for games. Getting this to work, in general, was fairly straightforward – I used ChronoSync to keep specific folders in synch, and Spanning Sync to keep calendars and addresses in synch between the two computers and Google. Using IMAP for Gmail kept mail sort of in synch, and Chrome’s facilities for synching bookmarks between instances via Google works ok.

But two things did not work at all well. There was no good way to keep two instances of Things in synch (but they are working on that), and absolutely no way to keep credentials and secure notes in synch (caveat, no way without committing to drinking the 1Pass kool-aid, which I may yet do).

I sat down on Monday night to finally get all the passwords out of the iMac keychain and onto the laptop somehow. Exercising Google-Fu, I found a pretty good AppleScript solution which did the trick, even if it had to deal with the annoyances of the Keychain. The trick was to unlock each keychain before running the script, then for each item in each keychain, as the script was running, click “Allow” on the two modal dialogs that Apple threw up. Somewhere over 300 clicks later, I had a text file with pretty well all I needed in it, and a firm decision to leave the data in a text file for reference, and not muck about trying to get it into the laptop keychain (See, I’m already thinking that 1Pass might be the better solution).
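
For the record, the command line security tool can do much the same job as the AppleScript, complete with the same blizzard of “Allow” dialogs – a sketch, assuming the default login keychain and a throwaway output file:

# unlock the keychain first so the dump can proceed
security unlock-keychain ~/Library/Keychains/login.keychain

# dump each item including its secret (-d); expect an Allow dialog per item
security dump-keychain -d ~/Library/Keychains/login.keychain > keychain-dump.txt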

The next part of the puzzle was to get it onto the laptop. Now I’m slightly paranoid about things like this, and wanted to have at least a third copy while I got it across. Ok, it was late at night, and I wasn’t thinking straight. I’ve misplaced my last USB thumb drive (damn, need another), so decided to toss the file onto DropBox to aid in the transfer. Which led to the next issue: there was no way I would throw this file into the cloud without it being encrypted, and hard encrypted.

Ok, easy solution there – encrypt it with PGP. Done. Now to install PGP on the laptop… wait a minute, when did Symantec buy up PGP? And they want how much for a personal copy? (As an aside, for an example of entirely obfuscating costs and product options, the Symantec PGP subsite is a masterpiece). When it comes to companies I am loath to entrust with protection of my secrets, Symantec is pretty high on the list. Ok, second plan, grab MacGPG. I’ve used earlier versions, and have used GPG and its variants on other platforms, and am confident in it. On the other hand, I really miss the point-and-click integration of MacPGP. Fortunately there’s a project under way to provide a point-and-click interface on top of the underlying command line tools, and I’m pretty happy with what they are doing. If you need it, go check out GPGTools, but be aware that you’ll probably need some of the beta versions of stuff – the stable release at the time of writing doesn’t provide an interface for decrypting files. The only thing I’m unhappy about is that it automagically decrypts files for me, without prompting for the pass phrase. So while it’s good for protecting the file in the cloud, it’s not so great for protecting the local copy (yes, I know that there’s little protection if someone swipes the laptop).
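
For the actual transfer, the command line tools underneath GPGTools are enough on their own – a sketch, with passwords.txt standing in for whatever you called the export:

# symmetric encryption with a pass phrase; produces passwords.txt.gpg for the cloud copy
gpg --symmetric --cipher-algo AES256 passwords.txt

# and on the laptop, decrypt it again (you will be prompted for the pass phrase)
gpg --output passwords.txt --decrypt passwords.txt.gpg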

Which leaves me with the old hack – create an encrypted DMG with the file(s) in it. It’s a pretty straightforward process:

  1. Run Disk Utility
  2. Select “New Image” and specify one of the encryption options. Other than the size and name, the rest of the options can be left at their defaults.
  3. Copy the files into the new DMG
  4. There is no step 4
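
The same thing can be done from the terminal with hdiutil if you prefer – a sketch, with the size, volume name and file names obviously placeholders:

# create a small journaled HFS+ image, encrypted with AES-256; you will be prompted for a pass phrase
hdiutil create -size 50m -fs HFS+J -encryption AES-256 -volname "Secrets" secrets.dmg

# mount it, copy the files in, then eject it
hdiutil attach secrets.dmg
cp passwords.txt.gpg /Volumes/Secrets/
hdiutil detach /Volumes/Secrets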

The only alarming gotcha is that it appears that you can decrypt the image without providing a credential, if you have allowed Disk Utility to store the pass phrase in your keychain. The trick is twofold – first, credentials are kept in a cache for a few minutes after use so that you usually don’t have to provide them in rapid succession. You can flush the cache by locking the keychain again. The second part is that by default the keychain remains unlocked after login. You can tweak these settings by going into the preferences for Keychain Access – I like to select “Show Status in Menu Bar”, and deselect “Keep login keychain unlocked”.
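
The same security tool from earlier can flush that cache and tighten the timeout without going near the GUI – a sketch, again assuming the login keychain:

# lock the login keychain immediately, flushing the cached credentials
security lock-keychain ~/Library/Keychains/login.keychain

# lock on sleep (-l) and after five minutes of inactivity (-u, with -t in seconds)
security set-keychain-settings -l -u -t 300 ~/Library/Keychains/login.keychain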

All of which takes me off on a ramble from what I was thinking about. It seems to me like the battle to allow and encourage strong personal encryption and digital signing has been abandoned, and the focus has shifted purely to secure use of online services. There are a few personal file protection products on the market, of unknown and unverified strength, and a few more business focussed products. The intended widely available public key infrastructure for general public use never eventuated, subsumed instead by an industry focussed around providing certificates for Web sites and certificates for B2B secure communications.

Apple provides File Vault as a means to encrypt the entire disk, and there are similar products available for various versions of Windows, but the trouble remains that for encrypting a subset of files the options are either dodgy or highly technical. And don’t get me started on digital signatures on mail.

All in a twitter.

There has been some talk already regarding the use of Twitter as Brisbane sank beneath the waves. Unfortunately all the talk I’ve seen so far has limited itself to merely cheering that the service was marvellous (for example, some of the talk over at The Drum), without examining what worked and what did not.

As I tap away at this on the train, I note that John Birmingham has touched on the subject, and his comments are certainly accurate and pertinent. I definitely echo his thoughts on the essential uselessness of traditional broadcast media through all of this. The offerings from the free-to-air television services were worse than useless, and the commercial radio stations carried forward as if nothing was happening. I say “worse than useless” because all that I saw from the FTA television services was distorted, often inaccurate and out of date, and carried an air of desperately attempting to manufacture panic and crisis.

There was a particular gulf between the representations of which areas had been affected. If you watched any of the three commercial stations, you would gather that the only flood-affected areas were Toowoomba, the Lockyer Valley, Rosalie, Milton and West End. If you watched the ABC you knew that Rocklea and Fairfield were trashed. If you monitored Twitter and other social media, you saw people on the ground with phones desperately broadcasting that areas like Fig Tree Pocket and Goodna were essentially destroyed, and can we please stop talking about the Three Monkeys Cafe in West End?

Of course, I no longer have any expectation that traditional broadcast media can be either informative or effective. And I include our appallingly bad newspaper of record here – the joke in Brisbane goes “Is that true, or did you read it in the Courier Mail?” Direct dealings with representatives of the broadcast and print media here over the last ten years or so have consistently emphasised that they will not travel more than a few kilometres from the centre of town, and absolutely will not seek anything other than a single image or 15-second film grab that can be used as a sting. [refer channel 9 drinking game here].

What interested me most over the past week has been how various “official” Twitter voices have used the service. There were some marked and intriguing differences. Individual users definitely grok Twitter – a constellation of different #hashtags coalesced to one or two within about 24 hours, and the crowd mainly acted to filter out spam and emphasise important and useful information. There was a constant background hum of spam and attempted scams in the feed, but I noticed whenever an important message was submitted from one of several voices of authority (and a tip of the hat to John Birmingham here, he carries a lot of weight on line), the crowd spontaneously amplified the message and ensured it was being heard: the flow was usually from Twitter to other social services like Facebook and LiveJournal, and even back onto the comments pages on web sites for the traditional media outlets.

Three particular accounts interested me: the 612 ABC channel, the Queensland Police channel, and my bête noire, the TransLink SEQ channel. A parenthetical aside here as well: I use the word ‘channel’ in the sense of water (and information) flow, not in the sense of a TV or radio channel.

Someone at 612 has understood Twitter right from the beginning, although it’s pretty obvious when their official operator is working, and not working, as the rate of messaging fluctuates wildly over the day. The bulk of their messages are snippets of information, or direct questions requesting feedback or information. Occasionally they will point off to their own website for further interaction, usually to pages used to gather information rather than distribute it, and occasionally point off at other resources.

The QPS channel historically was of mixed quality, and their direction zig-zagged over the week before settling into a solid pattern: messages were well #hashtagged, important information was emphasised and repeated, messages about deeper background information held on other sites had sufficient summary information to allow the reader to tell whether they needed to go to the external site.

TransLink, by contrast, was an example of how not to use the service. There was every indication that they were explicitly refusing to respond to direct messages or any sort of feedback, and virtually all their messages contained no content and directed readers to their web site. Of course on Tuesday as the CBD was to all intents and purposes evacuated, the web site melted down, and it was unusable for much of the week. I will refrain from pointing out the flaws of their site, here and now, but may come back to it. The height of their lunacy on Tuesday was when many, many people were asking if the rumour that public transport was halting at 2PM was true, and the *only* response in return was to keep repeating that they had a page with service statuses on it.

Energex and the Main Roads department had similar problems with their websites failing under load, and in retrospect this is an argument for the QPS media unit using Facebook to distribute further information: the chance of failure of Facebook as a web destination is far lower.

The Twitter stream from TransLinkSEQ is particularly interesting for the relative lack of information:

Through the morning, we had the following:

  • All CityCat & CityFerry suspended. Check the web for connecting buses. Leave extra time. More info http://www.translink.com.au
  • Due to heavy rain delays to some bus services, diversions on some routes. Check service status http://www.translink.com.au
  • Caboolture Line inbound and outbound services delayed up to 15mins due to signal fault. http://alturl.com/2thz8
  • Caboolture bus services majorly affected by flooding. http://alturl.com/b2brf
  • North Coast Line delays up to 40mins due to track/signal faults. Effects Caboolture line, delays up to 15mins. http://alturl.com/y99ap
  • Rosewood-Ipswich train services suspended due to water on tracks at Rosewood. Buses arranged. http://alturl.com/c6yvq
  • All CityCat & CityFerry services expected to be out of action all day due to strong river currents –> http://twurl.nl/7bwxnl
  • Caboolture bus services cancelled. Visit http://translink.com.au for more.
  • All Kangaroo buses cancelled. Visit http://translink.com.au for more.

After about 12pm there were widespread rumours – and a lot of direct questions were sent to TransLink about this – that public transport in the CBD was to be suspended at 2pm. This was what they broadcast in that period:

  • For more information on flood and weather affected services – http://twurl.nl/jct4cl
  • For information on the current status of flood affected services please refer to our website – http://twurl.nl/6z52j0
  • TransLink advises there are delays and disruptions on parts of the network. Services continue to run where possible.
  • Public Transport continues to run where possible – for latest disruption information see http://www.translink.com.au

At no point did they respond to the simple question “are services halting at 2pm”. The only rebuttal of that rumour came from the QPS Media service. After about 3pm they changed their message, and seemed to understand that people were understandably cranky:

  • Services are running throughout this afternoon. Expect delays & some cancellations. Check the website for service status info.
  • Our call centre is receiving a high number of calls, causing delays in answering. Check website for info to help us manage the call volume.
  • Trains are not operating to schedule this evening due to flooding. Services are still operating on all lines -> http://twurl.nl/z2i223
  • All train services at reduced frequency until further notice, some services have been suspended. Find out more –>http://twurl.nl/7c7esj
  • All train services suspended until 6am Wed 12 Jan. An hourly train timetable will then be in place, until further notice.

It’s no surprise that their website melted down after midday – note that virtually all their messages contained no useful information and just redirected to the website.

Successful use of Twitter as a meaningful and important information and communication tool recognised a handful of key features of the service that distinguish it from many other services:

  • it is more like a broadcast service than an asynchronous service like a web page;
  • messages should be considered ephemeral and only made meaningful by currency;
  • the tiny messages mean that it is accessible through an extremely broad range of mobile devices;
  • a very significant number of users use Twitter via mobile devices;
  • the infrastructure has evolved and been designed to support a staggeringly large number of simultaneous requests;
  • relevant information-rich messages are spread further and live longer than information-poor messages;
  • the service is inherently a two-way information flow, and questions and criticisms that flow back are indicators of errors or inadequacies in the outgoing flow.

I am hoping that organisations involved in this disaster take stock of how they used these services, and how these services can and should be used. The big lesson that can be learned here is that significantly more people have mobile phones with internet access than have battery powered radios.