The Schaffhouse Project

This is the other project. The Company of St George has invited me to take part in an event in Schaffhouse, in late July (yes, I know it’s just May now). And the prospect has me, frankly, terrified. For me this is kind of like someone who occasionally sings a bit being invited to perform as a soloist with a full orchestra at Carnegie Hall in front of the US President and the British Queen. Or a weekend rambler attempting to climb Everest without oxygen.

I’m excited about the prospect, but also very apprehensive. It’s not the aspect of being on public display and presenting a convincing portrayal of the right social class. It’s rather that I don’t have a good feeling about most of my kit at the moment, and don’t want to embarrass the Company.

The kit challenge is not the only part of this which has me worried. I’m not entirely sure how I’m going to get to Schaffhouse, and I speak no German and have only the very vaguest memory of bad Australian high school French. I can probably manage “S’il vous plaît” and “Danke” at the right times, but am just as likely to come out with “Grazie” and “Por favor”. I may be able to drive to the site, although I’ve never driven on the right-hand side of the road.

My kit challenge is a big enough issue. The thing about my gear is that virtually all of it was made by me in Australia, using the very limited range of cloth available there, or sourced from local suppliers who generally work with a lot less rigour (and a lot less access to originals and good research) than suppliers here in the UK and Europe. All of this is a large part of why we came over, and my intention had been to spend last year replacing most of my kit. That didn’t happen, as the pressures of work, finance and sorting out my partner’s visa completely obliterated any ability to approach the problem cogently and intelligently.

So what I am going to do here, and follow up with more posts as I try to tackle this problem, is collate a list of what I need to have compared to what I actually have, and get it in front of other eyes in order to gain advice in the short time I have to get this sorted out.

The Company Men’s Clothing Guide (V 1.1, 2009) is the source of the following statement of basic required kit:

Every member should aim to have the following:

Hat
To be worn at all times. Extravagant styles to be avoided!

Shirt
Linen (off-white). You should have at least two.

Braies
Linen underpants. They are usually off-white, though some rare German artworks show black braies. All male members should wear them or go without!

Doublet
Woollen, with sleeves.

Hose
Woollen, woven; cut on the bias.

Red livery jacket
A red wool company soldier’s jacket. This is the livery issue coat of the Company and every man should have one.

Sleeveless red livery jacket
Same as above without sleeves. A good alternative for warm weather or over armour.

Hood
Preferably half red, half off-white.

Shoes
Strongly made turnshoes.

Belt
A narrow belt with correct medieval buckle.

Purse
Wear a small neat purse with a minimum of useful 15th century contents: comb, money, kerchief, etc. Think of what you really need to carry.

Cloak
Not essential, but wonderful for cold weather and to sleep in. Must be of woollen cloth.

Burgundian Livery
Should be worn by all active military personnel who are veterans or recruits. It should be made according to the official pattern, preferably with the woollen cloth issued by the Company or the closest one available.

Knife
Have a small general purpose one in a sheath or in your purse. Do not hang cups, spoons, bags, scissors and bits and pieces from your belt!

Eating utensils
Spoon, cup, bowl and/or plate, all of 15th century design.

Bedding
Blankets, sheets and a canvas bag, big enough to fill with straw as a mattress. They can all be rolled up in the canvas bag for travelling.

Armour
Soldiers should aim to acquire a helmet and body armour (a simple jack, breastplate or brigandine) during their first year as “veterans”.

Weapons
A simple dagger or short sword is a minimum.

Washing
A piece of soap and linen towel. Everyone is allowed one small “private” bag for modern necessities.

Badges
Company badges are to be worn by full members only! No badges are to be worn on the company red jacket except the metal Co.St.Geo. shield badge. Cloth Co.St.Geo. badges may be sewn to cloaks, watch coats, etc. Other badges are restricted, and should be checked with an officer prior to wearing.

I’ll wind up doing a page or post for each of those items, with accompanying photos, but my initial thinking is thus:

Hat – I have a tall felt hat that’s reasonable, and a woollen sock hat that is ugly and silly, but also pretty good. I would be happy to get another, or reclaim the black wool ‘acorn’ hat my partner usually wears.

Shirt – I’ve replaced the too-white and too-short linen shirts that I’d brought with me with a new off-white and much longer shirt, and probably have enough of the same linen to knock up another one.

Braies – I have three or four good pairs, taken directly from older versions of the clothing guide, and now comfortably worn in. This and the shirt is probably my best kit.

Doublet – Neither of the two doublets I have is really the right sort of wool, and both should be replaced. I do have a good linen pourpoint / petticoat that I’ve just finished, which I can use under my jack to hold my hose up in place of the doublets I was wearing last year.

Hose – one of the pairs I have (the green, footed ones) is not the right sort of wool, but the red ones (which have a stirrup under the foot) are ok. The trouble is that both pairs are cut in what the Guide deems to be the later period style, with the seam up the back. If I could find suitable wool in a hurry, I do have the patterns for both pairs, and so could probably build a more correct pair.

Livery Jacket – I do not have this at all and would need to build it or acquire it. This has me worried, as I’m not sure about getting the correct colour.

Hood – Do not have, but this may be available from somewhere on the market.

Shoes – these definitely need to be replaced. The turnshoes I made myself are not bad, but as they are low shoes they need footed hose. They have also been resoled so many times, by having soles clumped on, that they are looking pretty battered. I have a pair of ill-fitting knee boots that are a bit early in style, and not really suitable.

Belt – I’ve got several good belts with simple buckles and chapes.

Purse – Two that I made are not bad, and are probably acceptable, but I’m going to have a look at others as well. I’ve got a variety of handkerchiefs, dice and other bits and pieces that can go in them.

Cloak – do not have. While this is optional, I suspect it would be good to have in the evening. This is a very low priority. I do have my giant blue watch coat, which is probably ok at night without the public about, but I’d need to take the synthetic Burgundian badge off the breast.

Burgundian Livery – I’m not sure what to do here. Again, I’m nervous about making this because I’m not sure I’d get the colours right, and it’s one item where looking too different from everyone else would be a real problem.

Knife – All good here: I have a knife good for my belt as an eating knife, and a slightly larger one that’s good for cooking and so on. Both are of plain design, and just look like good simple ware.

Eating Utensils – I’m fine for spoons, having both horn and pewter, and have probably acceptable wooden plates and bowls. I’ve got a very good large tankard from Flaming Gargoyle, but it may be a bit large to transport and a smaller cup would be good.

Bedding – I will have to find out how authentic this needs to be. If I can lay my hands on canvas, this should be achievable. If not, I’m going to need that cloak and hope to find a rock to use as a pillow.

Armour – Thanks to Paul, I’ve got a good jack that fits me (the one I made in Australia was a lovely fit, until I grew out of it around the middle), and the breastplate over it is fine – when I had it made, I deliberately went for a very simple style. I replaced the sallet last weekend with a nice Burgundian-styled one from Rebellum Armouries, and the gauntlets I have are good, albeit a little fancy for the rest of my kit. For most purposes I’m happy just to go bare-handed or wear the three-fingered mittens (deer skin) that I made. I don’t think I need to adjust any of this other than taking the St George cross back off the jack sleeve. The mail standard around my throat is split ring, not riveted, so if I could not replace that easily I would just leave it behind.

Weapons – I have no idea, and will have to enquire, whether they want live weapons or blunted. The baselard I have (which is not bad) and my arming sword are ok but rebated. The scabbards for both are rubbish, although the belt for the sword scabbard is good.

Washing – should be ok if I can find a linen towel for sale somewhere.

Badges – we don’t need no steeenking badges. I’d not take any with me, and would leave my somewhat rude hat badge off.

So there you go. I’m finding this profoundly daunting, and it’s scaring me.

If it comes to it, and I cannot get this sorted, I’ll pull out, or else offer to go at the end of the event to meet people and help with the pack up. The one thing I do not want is to do anything or present anything which would embarrass my hosts.

A New Project

So last weekend was spent at Wrest Park with the Beaufort Companye – although we were somewhat in disguise as generic English troops, not wearing the blue and white livery. Wrest Park was our first experience with the Beauforts last year, and that’s one of the reasons I think I will always be fond of it, but as an event it’s a delight in its own right. The venue itself is lovely, but the event is also fairly small and relaxed, with a low-stress pace and the opportunity to stroll around the huge garden in relative peace and tranquility.

On the Saturday evening one of the groups (whose name I keep losing in my head) which has strong crossovers with KDF Nottingham did a brief lesson on some (real) longsword and messer techniques for the re-enactors. It was nothing too exotic – some people were looking at the Zwerchhau with messer, and others were doing a simple counter cut into an Oberhau – but even being shown how to stand and step better, and how to hold the sword, was an eye opener for many re-enactors.

Given the enthusiastic response then, we spent a chunk of time on Sunday morning just training and drilling with Federschwert, and talking the public through what they were seeing. So yes, one of the new projects for Beaufort’s purposes will be doing a lot more of this, and doing it in a more structured fashion.

And once we get the roof-racks on the car, and are taking pole weapons to events, we may even introduce some of the evil and exhausting pole weapon drills.

There’s another giant new project started too, which I will write about later today.

Singletons considered harmful

Ok, I know it’s not a new observation, but the Singleton pattern must be one of the most overused, and abused, patterns that the Gang Of Four described.

This is on my mind this week as I’m working on a body of code that has way too many Singletons. I must emphasise that ultimately it’s my problem, not the original author’s, as I dropped the ball over a year ago and did not review the design and implementation. The problem has come home to haunt me as I introduced just one change too many and all the tests began to fail.

In this particular case, while looking at test coverage I wondered why a pretty important piece of life cycle management wasn’t being traversed in tests. That led me to take a close look and realise that it was buggy, and failing outright at the start of execution during tests. So I fixed that, and all the tests threw up because the Singleton in question was no longer in the expected state.

My main gripe with Singletons is that they run headlong into one of the cardinal rules of unit testing: all tests should be entirely independent of each other. The problem with a Singleton – particularly one that has some sort of lifecycle – is that suddenly tests are connected by the internal state of an object that may not even be the unit under test. Which leads to unstable tests prone to mystery failures. And unstable tests lead to a lack of confidence in the validity of the code.
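
To make that concrete, here is a minimal sketch (invented names, JUnit 4 assumed) of the kind of hidden coupling I mean:

// ConnectionManager.java - a hypothetical Singleton with a life cycle.
public final class ConnectionManager {
    private static final ConnectionManager INSTANCE = new ConnectionManager();
    private boolean started;

    private ConnectionManager() { }

    public static ConnectionManager getInstance() { return INSTANCE; }

    public void start() { started = true; }
    public void shutdown() { started = false; }
    public boolean isStarted() { return started; }
}

// ConnectionManagerTest.java - two tests that only pass in one order,
// because they share the Singleton's internal state.
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class ConnectionManagerTest {

    @Test
    public void startLeavesManagerRunning() {
        ConnectionManager.getInstance().start();
        assertTrue(ConnectionManager.getInstance().isStarted());
    }

    @Test
    public void freshManagerIsNotRunning() {
        // Fails if the test above ran first and never called shutdown().
        assertFalse(ConnectionManager.getInstance().isStarted());
    }
}

Run the second test on its own and it passes; run the whole suite and the result depends entirely on execution order, which is exactly the mystery-failure mode I'm complaining about.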

Now, I’m going to need to articulate this to other coders to head off any repeat of this problem, so it’s worth my while to hand wave about when Singletons are appropriate, and when other techniques are better.

To begin with, I often see Singletons introduced to provide static pieces of code. I strongly suspect that this is because the coder does not understand how static methods and attributes work, or simply forgets. Probably the biggest single clue that these cases should not be implemented as Singletons is that they have no persistent state.
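
By way of a hedged illustration (the names are invented): if the “Singleton” is really just a bag of functions with no state, plain static methods say the same thing with a lot less ceremony.

// What I keep finding: Singleton ceremony wrapped around stateless code.
public final class PriceFormatter {
    private static final PriceFormatter INSTANCE = new PriceFormatter();

    private PriceFormatter() { }

    public static PriceFormatter getInstance() { return INSTANCE; }

    public String format(long pence) {
        return String.format("£%d.%02d", pence / 100, pence % 100);
    }
}

// No persistent state, so a static method expresses the intent directly,
// and there is nothing to leak between tests.
public final class PriceFormat {
    private PriceFormat() { }

    public static String format(long pence) {
        return String.format("£%d.%02d", pence / 100, pence % 100);
    }
}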

When I talked it through with the team, we zoomed in on that idea from two different directions with little prompting: by thinking about the code construct (the Singleton pattern) instead of thinking about the data, it is way too easy to miss that the Singleton pattern gives the data state a different scope and a different life cycle from the rest of the code.

In the space I’m mainly playing in, it’s fairly common to have a bunch of threads handling incoming requests from some external agency, all in kind of similar ways. This transactional model, if inverted to be data centric, can be summarised as: accept data, map it onto an output state, and throw away any working state in preparation for the next request. In Java terms the scope of all data is local to the thread. The data state of the Singleton, however, is at a higher level – an application or service level. Thus objection one: Singletons cause data states at different levels of abstraction or different levels of management to be promiscuously mixed.

This immediately leads to objection two: Singletons easily cause cross-thread side effects, as they bind threads together in non-obvious ways. This problem can be lessened if the Singleton provides read-only state, in which case it might be better done using static attributes, and if the potential side effects are well documented and described.
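
The underlying hazard looks something like this small, self-contained demonstration (hypothetical names, not project code):

// AuditContext.java - one mutable field silently shared by every thread.
public final class AuditContext {
    private static final AuditContext INSTANCE = new AuditContext();
    private String lastCustomer;

    private AuditContext() { }

    public static AuditContext getInstance() { return INSTANCE; }

    public void setLastCustomer(String name) { lastCustomer = name; }
    public String getLastCustomer() { return lastCustomer; }
}

// CrossThreadDemo.java - thread A's "local" state is changed by thread B.
public class CrossThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(new Runnable() {
            public void run() {
                AuditContext.getInstance().setLastCustomer("Alice");
                try { Thread.sleep(50); } catch (InterruptedException e) { }
                // Frequently prints "Bob": the other thread got there in between.
                System.out.println("Thread A sees " + AuditContext.getInstance().getLastCustomer());
            }
        });
        Thread b = new Thread(new Runnable() {
            public void run() {
                AuditContext.getInstance().setLastCustomer("Bob");
            }
        });
        a.start();
        b.start();
        a.join();
        b.join();
    }
}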

Objection three is somewhat more of an aesthetic gripe. The common ways in which Singletons are usually implemented in Java, apart from not being as thread-safe as they appear to the naive eye, break the doctrine of Separation of Concerns. The Singleton class has two responsibilities, not just one, which is a very bad smell: it is responsible for whatever its purpose in life is, and it is responsible for making sure it is alone in the universe.

There are a variety of ways of getting around this bad smell. A lot of runtime containers – be it simply the JVM firing up with a single instance of a class providing main(), or Spring, or a web application server taking care of the “only one” behaviour behind the scenes – provide a trustable context about which you can say “if I make just one of these objects, and put it in that context, there will only be one of them”. In the cases above, it also means that the instance lives in some sort of “application” or “service” scope, with a life cycle that can be tied to the broader context.

At a bare minimum, if you cannot identify or obtain access to the application context, you should aim to separate out the two concerns – provide a class that does stuff, and a class that holds a single instance of that do-stuff class. While this adds a little bit of extra boilerplate, the simple change suddenly means you can test the two behaviours independently, and that you can have thread-local instances injected in the scope of your independent unit tests.
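
A minimal sketch of that split, with invented names:

// RateCalculator.java - the class that does stuff. Plain, constructible,
// and trivially testable in isolation.
public class RateCalculator {
    public double rateFor(String currency) {
        // Illustrative logic only.
        return "GBP".equals(currency) ? 1.0 : 1.2;
    }
}

// RateCalculatorHolder.java - the class whose only job is "there is one".
public final class RateCalculatorHolder {
    private static final RateCalculator INSTANCE = new RateCalculator();

    private RateCalculatorHolder() { }

    public static RateCalculator instance() { return INSTANCE; }
}

Production code asks the holder for the shared instance; unit tests just construct (or inject) their own RateCalculator and never go anywhere near the shared one.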

And a final objection, primarily aesthetic. There are a bunch of different ways to build a Singleton in Java. Not all of them are thread safe, and it’s annoyingly difficult to do lazy instantiation in a thread-safe manner, particularly if you want there to be exactly one run through a costly process. The ugliness arises because the thread-safe approaches are generally clunky kinds of fiddles that require the coder to think about the behaviour of the JVM instead of the behaviour of their code. There’s that separation of concerns biting us in the arse again.
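
For what it’s worth, the least clunky of those fiddles that I know of is the initialisation-on-demand holder idiom, which leans on the JVM’s class-loading guarantees to get lazy, exactly-once construction without explicit locking. A sketch, with invented names:

public final class ExpensiveService {

    private ExpensiveService() {
        // Imagine a costly start-up here that must run exactly once.
    }

    // The nested class is not loaded until getInstance() is first called,
    // and the JVM guarantees its static initialiser runs exactly once,
    // even with multiple threads racing for the instance.
    private static final class Holder {
        private static final ExpensiveService INSTANCE = new ExpensiveService();
    }

    public static ExpensiveService getInstance() {
        return Holder.INSTANCE;
    }
}

Note that this is still reasoning about the behaviour of the JVM rather than the behaviour of the code, which is rather the point of the objection.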

I do not think the pattern is to be universally avoided though. It’s highly probable that the application or service scope is stateful, and has a well defined life cycle. Like it or not, the life cycle state is a single piece of information that needs to exist at a different level of abstraction to the per-thread state (unless you are fortunate enough to be able to think entirely at a thread level, and there genuinely is no application level state).

As an example, I’ve fallen into the habit of using a roughly MVC architectural pattern. Sometime I will go into this in detail, but for now simply accept that it’s a handy, simple framework to hang more complex behaviour off, while encouraging the decomposition of the code into easily testable parts. In my case, the ‘view’ is often provided as servlets, often with a RESTful design, and not necessarily provided by a single class. It’s thus pretty common for me not to have an accessible application-level context without using Spring or similar. In these instances, I tend to use the Controller layer to hold the application-level state and manage the application-level life cycle. Of course, this is easily abused as well: without paying attention you can find all sorts of pieces of code dialling home to the controller layer or object, but at least by separating the singleton aspects from the controller aspects, you create the opportunity to not bind tests together.
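
A rough sketch of the shape of this, using the plain Servlet API rather than Spring (all names hypothetical):

// AppController.java - holds application-level state and life cycle.
public class AppController {
    public static final String CONTEXT_KEY = "appController";

    private volatile boolean started;

    public void start() { started = true; }
    public void stop() { started = false; }
    public boolean isStarted() { return started; }
}

// AppBootstrap.java - the container builds exactly one of these per web
// application, so no class needs to police "there can be only one" itself.
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class AppBootstrap implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        AppController controller = new AppController();
        controller.start();
        sce.getServletContext().setAttribute(AppController.CONTEXT_KEY, controller);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        AppController controller =
                (AppController) sce.getServletContext().getAttribute(AppController.CONTEXT_KEY);
        if (controller != null) {
            controller.stop();
        }
    }
}

Servlets fetch the controller from the ServletContext, while unit tests simply construct their own AppController, so nothing binds the tests together.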

Let me leave you with a thought experiment: if I have a simple web application with just a single servlet class, does that servlet class provide a single-instance application level context?

Journalled Out

I’ve been thinking in recent days that I could use something journal-ish. There are two aspects to this thinking. For one, I tend to accumulate documents and links to things that will probably be useful someday, or I want to remember short-term, but they get smeared everywhere. Bookmarks across several machines and browsers, text documents tucked into folders optimistically labelled ‘to-do’ or ‘in progress’, stuff in various note-taking applications. All of which leads to a definite sense of mental clutter which I really want to eliminate. I have identified that one of the things that makes me anxious is physical and mental clutter, a sense of being overwhelmed by Stuff To Take Care Of Right Now.

It would be nice just to declare mental bankruptcy, throw all this in the bin, tear off my clothes, and run naked into the woods to live as a wild man, feeding on berries and roots. Regrettably while this simple life has certain attractions – not the least being an opportunity to dispense gnomic wisdom and entirely fabricated home-spun philosophy to unsuspecting passers-by – it does not appear to be paid particularly well anymore. Besides, brambles, briars and badgers are not a good match for running naked through the woods at my age.

Initially I’ve been thinking about something like Day One, which has the attraction of being somewhat insulated against future obsolescence (as far as I can tell, the data is stored in individual PLIST files), as well as having a frictionless interface. That’s important. The benefit of pencil and paper is that it’s always on. The disadvantages for me are that I cannot read my own handwriting, and generally cannot fit a usefully large notebook in my pocket. Also, since so much of what I need to refer to comes with a URL or an image associated with it, there’s friction in needing to manually link together disparate data repositories.

The elephant in the room for all of this (see what I did there) is of course Evernote. I was startled to discover how many apps I already have on phone, iPad and desktop that natively link to Evernote, and the environment Evernote occupies is rich and varied. Which makes me a little nervous: if I went this way, would I still have different bits of data scattered across multiple interfaces? Additionally, even though they appear to be an honest and reliable company, the product still revolves around having my data on their servers for a ‘free’ service.

Sigh. Thinking is in progress.

Glued to a Screen

There’s really something quite odd, if you consider it, about watching movies while flying: hurtling halfway around the globe at something like 900 km/h, some 14 or 15 km above the surface, eyes glued to a small screen in the back of the seat in front of you.

I did not sleep at all on the trip from London to Brisbane. 11 hours Heathrow to Kuala Lumpur, 8 hours to Brisbane. Which gave me the opportunity to catch up on quite a number of the movies I missed through 2013:

  • Wolverine
  • Pacific Rim
  • Kick Ass 2
  • Elysium
  • RED 2
  • Avengers
  • Despicable Me 2

Just in case you were wondering.

A Certain Quality

Java is not the best of languages. There are plenty of languages better for particular niches or uses, and it’s littered with annoyances and prone to abuses. So are C, COBOL and Fortran. But it’s good enough almost all of the time, and the environment that has grown up around it has made it a useful language for building reasonably performant web-facing server products. One real standout, though, is the ease with which Java can reflect on and examine itself at runtime.

This has opened the door for a number of community led tools that allow us to declare quality standards, and automatically monitor and control adherence to those standards. These are powerful ideas: coders can relax and focus on the task at hand, secure in the knowledge that the surrounding infrastructure will maintain the quality of the code. It’s like a writer with a word processor and a good editor: spelling errors will get sorted out immediately, and somewhere down the track the grammar and prose will get beaten into shape.

There is now a good mix of static and dynamic analysis frameworks out there, and I’ve settled on Findbugs, Checkstyle and Jacoco as the core. PMD is in the mix as well, but more as a backstop for the other tools. The thing that appeals to me about these three is that the analysis they do, and the standards they mandate, can be declared via the same Maven POM as the rest of the build definition – and in the IDE as well – so that quality control is baked in at the lowest level of development activity.

Because these are declared quality standards, it means that our Jenkins CI tool can use the same declaration to pass or fail a build – code that does not meet required standards cannot progress out of development, and Jenkins provides visibility of the current level of code quality. Jenkins is not so good, though, at showing longer term trends, which is where Sonar comes in. I was delighted to discover that Sonar had become freely available as SonarQube, as it’s a fantastic tool for seeing at a glance if there are quality trends that need to be addressed, and for expressing complex code quality issues in a cogent fashion.

The tool chain then is trivially simple for the developer to use. Maven and the IDE on the desktop tell her immediately if there are code quality issues to address before committing. On commit, the Jenkins CI build is a gatekeeper that will not allow code that does not meet certain basic criteria to pass. Finally Sonar gets to look at the code and see how it is progressing over time.

I am pleased with this tool chain for two reasons. First, code quality is an integral part of the developer’s daily experience, rather than something bolted on that happens later and is somebody else’s problem. Quality becomes a habit. Second, the process is entirely transparent and visible. The hard code quality metrics are right there for all to see (for certain values of “all”: they do require authentication to examine) and are visibly impartial and objective, not subjective. If I commit something dumb, it’s not a person telling me he thinks I’m wrong. The quality of my work is not only my responsibility; I have objective benchmarks to measure it against.

This sort of toolchain exemplifies, to my mind, a mature approach to technology: automate standard procedures, and automate whatever does not need human intervention. It’s madness to repeat, more than once or twice, any process that can be automated, and the time and cost saving of automated quality control compared to manual quality control is enormous. The drawback is that setting up – and to some extent maintaining – the tool chain is non-trivial, and there is a risk that the cost of this setup and maintenance can deter enhancement or rectification of flaws in the toolchain. An interesting implication is that the elements of this tool chain – Jenkins, Sonar and so forth – should be treated as production environments, even though they are used to support development. This is a distinction frequently lost: this stuff needs to be backed up and cared for with as much love and attention as any other production infrastructure.

Now, not everyone appreciates the dogmatism and rather strong opinions about style implicit in the toolchain, particularly those arising from Checkstyle. Part of the point of Checkstyle, Findbugs and PMD is that, like it or not, they express the generally accepted best practices that have emerged from somewhat over 15 years of community work on and with Java. They’re not my rules; they’re the rules emergent from the zeitgeist. There are really two responses if these tools persistently complain about something you habitually do in code – that one thing that you always do that they always complain about. You can relax or modify the rules and build in local variations. Or you can stop and think, and acknowledge that maybe, just maybe, your way of doing things is not the best.
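
To pick a deliberately unglamorous illustration (the exact rule names depend on how you configure the tools, so treat this as a sketch): comparing Strings with == is exactly the sort of habitual slip the analysers will keep nagging about until you fix it or consciously suppress it.

public class OrderChecker {

    // The habitual version: reference comparison of Strings. It works just
    // often enough (interned literals) to survive casual testing, which is
    // why a human reviewer can miss it and a tool will not.
    public boolean isDone(String status) {
        return status == "DONE";
    }

    // The version the tools nudge you towards, which also happens to be
    // null-safe.
    public boolean isDoneProperly(String status) {
        return "DONE".equals(status);
    }
}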

They are, after all, fallible automated rules expressed through fallible software. They are not always going to get it right. But the point of the alerts and warnings from these tools is not to force the coder to do something, but to encourage her to notice the things they are pointing out, encourage her to think about what she is doing, encourage her to think about quality as part of her day-to-day hammering on the keyboard. I’d rather see fewer, more beautiful lines of code, than lots of lines of code. It’s not a race.

I find it interesting that being able to objectively measure code quality has tended to improve code quality. Observation changed the thing being observed (is that where heisenbugs arise?). There’s not a direct relationship between the measuring tools and the code quality. Rather, what seems to have happened is that by using the toolchain to specify certain fixed metrics that the code must attain in order to ‘pass’ and be built into release artefacts, the changes made to attain those metrics have tended to push the code towards being cleaner, simpler and more maintainable. I am aware that there are still knots of complexity, and knots of less-than-beautiful architecture, both of which I hope to clean up over the next year, but the point is not that those problem areas exist; it’s that they are visible, and there will be an objective indication of when they’ve been eradicated.

There seems to be a lower rate of defects reaching the QA team as well, although I don’t have a good handle on that numerically – when I first started noticing it, I neglected to come up with a way of measuring it, and now it’s going to be hard to work out from the Jira records. (The lesson of course being: measure early, measure often.) In general the defects that do show up are now functional and design problems, not simply buggy code, or else the sorts of performance or concurrency problems that really only show up under production-like load and are difficult and expensive to test for as a matter of day-to-day development.

There is a big caveat attached to this toolchain though. I’m a fan of an approach that can be loosely hand-waved as design-by-contract. There’s value in expressing exposed functional end-points – at whatever level of the code or system you pick – in terms of statements about what input will be accepted, what the relationship between input and output is, what side-effects the invocation has, and so forth. Black box coding. As an approach it fits neatly with TDD and encourages loose coupling and separation of concerns. All very good things. In practical terms, however, it depends on two things: trust that the documentation is correct and the contract matches the implementation, and trust that the implementation has been tested and verified against the contract. If those two things can be trusted, then the developer can just use the implementation as a black box, and not have to either delve into the implementation code or build redundant data sanitisation and error handling. At the moment, there’s no automated means to perform this sort of contract validation. The best option at this point seems to be peer code reviews, and a piece of 2×4 with nails in it (1), but that’s expensive and resource intensive.
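
The closest I can get right now is to write the contract down in the Javadoc and pin it with a shared contract test that every implementation must pass. A hedged sketch (invented names, JUnit 4 assumed), which is still a human keeping the words and the assertions in step rather than anything automated:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;

/**
 * Contract: isoCode must be a non-null ISO 3166 country code; the return
 * value is never null; the call has no side effects; unknown or null codes
 * cause an IllegalArgumentException.
 */
interface CountryNames {
    String displayNameFor(String isoCode);
}

// Every implementation gets a concrete subclass of this test, so the same
// contract is verified against each of them in the same terms.
public abstract class CountryNamesContractTest {

    protected abstract CountryNames newImplementation();

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullCodes() {
        newImplementation().displayNameFor(null);
    }

    @Test
    public void neverReturnsNullForKnownCodes() {
        assertNotNull(newImplementation().displayNameFor("GB"));
    }
}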

The bottom line reason for investing in a tool chain like this – and make no mistake, it’s potentially expensive to set up and maintain – is that if you have a typical kind of team structure, it’s easy for the developers to overwhelm the QA team with stuff to be tested. The higher your code quality, and the more dumb-ass errors you can trap at the development stage, the less likely it is that defects will get past your harried QA guys.

(1) It’s like I always say, you get more with a kind word and a two-by-four than with just a kind word. – Marcus Cole

The Wall

I went to the Imperial War Museum a few days ago – sadly it’s largely under construction at the moment, but should re-open in Summer 2014, so I can go back. Outside the front entrance is a section of the Berlin Wall.

The side that faced the west:

The side that faced the east:

And the narrow divide between the east and the west:

Perhaps 6 inches of poorly made concrete and rebar, forcing a psychological chasm that seemed for such a very long time to be completely unbridgeable. It’s worth thinking about.

Java 7 JDK on Mac OS X

This is one of the things that Apple should be kicked in the shin for. There is no excuse for continuing to completely foul up Java installation on Mac OS X.

If you are like me, and trying to figure out how to get the Java 7 JDK installed on the latest build, here is the key: http://stackoverflow.com/a/19737307

The trick for me is probably the trick for you:
1) download the JDK from Oracle
2) run the downloaded DMG to install
3) modify your .profile or .bashrc or wherever you have it to include

JAVA_HOME=$(/usr/libexec/java_home)
export JAVA_HOME

4) make another cup of coffee and curse.

On Testing

I really should do a write-up about the CI and code quality infrastructure that I’ve set up, as in recent months it’s really started to pay off for the considerable effort it’s cost. But that’s not what’s on my mind today.

Rather I am struck by how easy it is to really stuff up unit tests, and how hard it is to get them right. I’m not so concerned with simple things like the proportion of code that is covered by tests, although that is important, so much as the difficulty of testing what code should do instead of what it does do. This is not simply an artefact of TDD either, although one of the problems I have with TDD is that it can lead to beautiful tests accurately exercising code that does not actually meet requirements – it worries me that Agile is often treated as an excuse not to define or identify requirements in much depth.

Two examples that I’ve seen recently – and have been equally guilty of – stand out.

First is falling into the trap of inadvertently testing the behaviour of a complex mock object rather than the behaviour of the real object. I’ve been on a warpath across the code for this one, as in retrospect it reveals bad code smells that I really should have picked up earlier.
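
In its most exaggerated form (a contrived sketch using Mockito and JUnit 4, not real project code) it looks something like this:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class TaxCalculatorTest {

    interface TaxCalculator {
        long taxOn(long pencePrice);
    }

    @Test
    public void taxIsTwentyPercent() {
        TaxCalculator calculator = mock(TaxCalculator.class);
        when(calculator.taxOn(1000L)).thenReturn(200L);

        // Passes, but all it proves is that Mockito plays back the value we
        // just programmed into it. No real TaxCalculator is ever exercised.
        assertEquals(200L, calculator.taxOn(1000L));
    }
}

The real-world version is usually subtler: the collaborators are mocked so heavily that the assertions only ever touch the mocks.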

Second is testing around the desired behaviour – for instance a method that transforms some value into a String, which has tests for the failure case of a bad input, and tests that the returned String is not blank or null, but no tests that verify that the output for a given known input is the expected output.
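
A sketch of the difference, with a trivially silly method standing in for the real thing:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;

import org.junit.Test;

public class DescriberTest {

    // The method under test, inlined here for brevity.
    static String describe(int count) {
        return count + " item" + (count == 1 ? "" : "s");
    }

    @Test
    public void testsAroundTheBehaviour() {
        // All of these pass, and none of them pin down what the method is for.
        assertNotNull(describe(2));
        assertFalse(describe(2).isEmpty());
    }

    @Test
    public void testsTheContract() {
        // Known input, expected output: the assertion the first test never makes.
        assertEquals("1 item", describe(1));
        assertEquals("2 items", describe(2));
    }
}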

In both cases it feels like we’re looking too closely at the implementation of the method, rather than stepping back and looking at the contract the method has.

Testing. Hard it is.

Deserialising Lists in Jersey Part II

or “Type Erasure is not your friend”

The solution I outlined in my previous post has one big drawback (well, two, actually): it does not work.

The trouble is that the approach I suggested – a common generic function to invoke the request with a GenericType – resulted in the nested type being erased at run time. The code compiled, and a List was returned when the response was deserialised, but Jersey constructed a List of HashMap rather than a List of the declared desired type.
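
For context, the helper was shaped roughly like this (reconstructed from memory rather than copied, error handling elided, and using the same baseTarget field as the working code further down):

private <T> List<T> fetchList(String path) {
  // Looks plausible, but <T> is erased at run time: the anonymous
  // GenericType captures "List<T>", not "List<Thing>", so Jersey falls
  // back to deserialising each element as a HashMap.
  return baseTarget.path(path)
      .request(MediaType.APPLICATION_JSON)
      .get(new GenericType<List<T>>() { });
}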

This is extremely puzzling, as I expected it to collapse at run time with type errors, but it didn’t. My initial thought when this rose up and bit me – and consumed a lot of time that I could ill afford – was that there was a difference in behaviour between the deployed run time and running with the Jersey test framework. I was wrong – when I modified my test to examine the content of the returned List, the fault showed up immediately.

A diversion: this shows how very easy it is to stuff up a unit test. My initial test looked something like:

List<Thing> result = client.fetchList();
assertNotNull("should not be null", result);
assertFalse("should not be empty", result.isEmpty());

which seems pretty reasonable, right? We got a List back, and it had stuff in it, all as expected. I did not bother to examine the deserialised objects, because I was doing that in a different test, for a method that simply returned a Thing rather than a List – that’s sufficient, right? We know that deserialisation of the JSON body is working, right?

Extending the test to check something that you would not automatically think to test turned up a failure immediately:

List<Thing> result = client.fetchList();
assertNotNull("should not be null", result);
assertFalse("should not be empty", result.isEmpty());
assertTrue("Should be a Thing", TypeUtils.isInstance(result.get(0), Thing.class));

The List did not actually contain Thing instances.

A quick solution was obvious, although it resulted in duplicating some boilerplate error-handling code – drop the common generic method, and modify the calling methods to invoke the Jersey get() using a GenericType constructed with a specific List declaration.

This did highlight an annoying inconsistency in the Jersey client design though. For the simple cases of methods like

Thing getThing() throws BusinessException;

then the plain Jersey get() which returns a Response can be used. Make a call, look at the Response status, and either deserialise the body as a Thing if there’s no error, or as our declared exception type on error and throw the exception. Simple, clean and pretty intuitive.

In the case of the get(GenericType) form of the calls though, you get back the declared type, not a Response. Instead you need to trap for a bunch of particular exceptions that can come out of Jersey – particularly ResponseProcessingException – and then obtain the raw Response from the exception. It works, but it’s definitely clunkier than I would prefer:

public final List<Thing> getThings() throws BusinessException {
  try {
    List<Thing> result = baseTarget.path(PATH_OF_RESOURCE)
        .request(MediaType.APPLICATION_JSON)
        .get(new GenericType<List<Thing>>() {});
    return result;
  } catch (ResponseProcessingException rpe) {
    // The server returned a non-2xx status: parseException deserialises the
    // error body and throws the resulting BusinessException.
    parseException(rpe.getResponse());
    // Should be unreachable, but keeps the compiler satisfied.
    throw new BusinessException("Unparsed error response", rpe);
  } catch (WebApplicationException | ProcessingException pe) {
    // Client-side failure, so there is no Response to deserialise.
    throw new BusinessException("Bad request", pe);
  }
}

Note that we get either WebApplicationException or ProcessingException if there is a problem client-side, in which case we don’t have a response to deserialise back to our BusinessException, whereas we get a ResponseProcessingException whenever the server returns a non-200 status (or, to be precise, anything outside the 200-299 range).

Of course, all of this is slightly skewed by our use-case. Realistically, most RESTful services have a pretty small set of end-points, so the amount of repeated boilerplate code in the client is limited. In our case we have a single data abstraction service sitting between the database(s) and the business code, and that necessarily has a very broad interface, resulting in a client with lots of methods. It ain’t pretty but it works, and currently there’s a reasonable balance between elegant code and readable code with repeated boilerplate bits.