Adventures with Docker Swarm

It’s been around three years since I last worked with Docker in any seriousness. At that time, the state of networking and deployment was quite rudimentary, and there was still a reliance on deploying load balancers and similar infrastructure. So I was very impressed, when revisiting the “getting started” tutorials, at how straightforward and powerful Docker Swarm has become.

I’ve built a small implementation of those tutorials to illustrate the ease with which a full stack can be deployed.
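To give a flavour of how little ceremony is involved, here’s a minimal sketch – the stack name, image and port are arbitrary placeholders, not anything from the tutorials themselves – that brings up a replicated nginx service on a single-node swarm:

[code lang=text]
# Turn this host into a single-node swarm.
docker swarm init

# A trivial stack definition - three replicas of stock nginx.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 3
EOF

# Deploy the stack and check the service is running.
docker stack deploy -c docker-compose.yml demo
docker service ls
[/code]

Swarm’s ingress routing mesh then answers on port 8080 of every node in the swarm, spreading requests across the replicas – no separate load balancer required.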

Four Questions For Engineers

One way of looking at the art and science of software engineering is that it is a process of mapping human desires and wishes – the insides of people’s heads – onto a computer system. This is not a particularly novel idea, and it’s one that you are probably familiar with, but it’s an important one. Engagement with a client can be boiled down to a conversation wherein we discover the client’s needs and wishes, and then present an instantiation of our interpretation of what they have expressed. There is an awful lot of opportunity for error in this. Mapping the contents of their heads to vibrations in the air and symbols on paper or a screen is a lossy process. Our interpretation of what we hear or read is a lossy process. Implementing those ideas, dreams and wishes in an information system is a lossy process. It’s a wonder software ever gets built at all.

Continue reading “Four Questions For Engineers”

Cross-Account use of AWS CLI

The documentation around using the AWS CLI from an AWS EC2 instance in one account to access resources in another account is not great. The information is all there, somewhere, but it’s scattered across many places, and to derive what you need from those sources you have to pretty well read all of them. Two useful places to begin – though you will need to spiral out from them – are:

However, I’ll try to give a summary and simple example here. This won’t include code or detailed instructions to set this up, although I hope to follow this up with a code demonstration expressed in Terraform.
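To give a flavour of where this lands without spoiling the detail: the usual trick is a profile in ~/.aws/config on the instance that assumes a role in the target account using the instance’s own credentials. A minimal sketch – the account number and role name are hypothetical:

[code lang=text]
# ~/.aws/config on the EC2 instance.
# The ARN below is hypothetical - it names a role in the *other* account
# that trusts the role attached to this instance's profile.
[profile cross-account]
role_arn = arn:aws:iam::222222222222:role/remote-admin
credential_source = Ec2InstanceMetadata
[/code]

With that in place, something like aws s3 ls --profile cross-account will assume the remote role transparently.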

Continue reading “Cross-Account use of AWS CLI”

OpenSSL on HighSierra

Recently I finally got around to reading the excellent OpenSSL Cookbook from Ivan Ristić – you can grab a free copy via https://www.openssl.org/docs/ – and the first question in my mind was “what version of OpenSSL is already installed on my Mac?”. A quick check showed that it’s there pre-built in HighSierra, in /usr/bin:

[code lang=text]
$ /usr/bin/openssl version
LibreSSL 2.2.7
[/code]
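Note that what Apple ships is actually LibreSSL rather than OpenSSL proper. If you want a little more detail about the build – platform, compiler flags and the default certificate directory – the -a flag is your friend:

[code lang=text]
$ /usr/bin/openssl version -a
[/code]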

Continue reading “OpenSSL on HighSierra”

TLS 1.3 – It’s like Christmas

Via The Register I see that TLS 1.3 has finally rolled off the standards committee’s draft assembly line. This is pretty big news, not least because we’ve been working with the current TLS 1.2 standard for almost a decade, and its defects have been well and truly discovered and exploited.
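As an aside: once you have a sufficiently new OpenSSL build to hand (1.1.1 or later – the LibreSSL that ships with macOS won’t cut it), you can check whether a given server will negotiate the new protocol. The hostname here is just a placeholder:

[code lang=text]
$ openssl s_client -connect www.example.com:443 -tls1_3
[/code]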

Continue reading “TLS 1.3 – It’s like Christmas”

Bootstrapping AWS with Terraform and CodeCommit

A rough model that I’ve been working on and thinking about recently is for the AWS account (or accounts) to be put together so that there’s a “bastion” or “bootstrap” instance that can be used to build out the rest of the environment. There is a certain chicken-and-egg problem around this, particularly if you want to use AWS resources and services to bootstrap this up.

I’m going to talk (at length) about a solution I’ve recently gotten sorted out. This has a number of prerequisites that I’ll outline before getting into how it all hangs together. The key thing is to limit manual tinkering about as far as possible, and to script up as much as possible, so that the process is both repeatable and able to be exposed to standard sorts of code-cutting practices.

One caveat around what I’m presenting – the Terraform state is stored locally to where we are running Terraform, which is not best practice. Ideally we’d be tucking it away in something like S3, which I will probably cover at a later point.
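For reference, moving the state into S3 is mostly a matter of declaring a backend and re-running terraform init, which will offer to copy the local state across. A minimal sketch, assuming a pre-existing bucket – the bucket name and region here are hypothetical:

[code lang=text]
# Hypothetical bucket name and region - substitute your own.
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "example-tfstate"
    key    = "bootstrap/terraform.tfstate"
    region = "ap-southeast-2"
  }
}
EOF

terraform init
[/code]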

Continue reading “Bootstrapping AWS with Terraform and CodeCommit”

Creating a custom Kylo Sandbox

I had a need – or desire – to build a VM with a certain version of NiFi on it, and a handful of other Hadoop-type services, to act as a local sandbox. As I’ve mentioned before, I do find it slightly more convenient to use a single VM for a collection of services, rather than a collection of Docker images, mainly because it allows me to open the bonnet of the box and get my hands dirty fiddling with the insides of the machine. Since I wanted to be picky about what was getting installed, I opted to start from scratch rather than re-using the HDP or Kylo sandboxes.

The only real complication was that I realised I also wanted to drop Kylo on this sandbox, after I’d already gone down the route of getting NiFi installed. This was entertaining, as it revealed various ways in which the documentation and scripts around installing Kylo make inadvertent hard-wired assumptions about where and how NiFi is installed, which I needed to work around.

Continue reading “Creating a custom Kylo Sandbox”

Smoke testing Kafka in HDP

Assuming that you have a vanilla HDP installation, or the HDP sandbox, or have installed a cluster with Ambari and added Kafka, then the following may help you smoke test the behaviour of Kafka. Obviously, if you’ve configured Kafka or ZooKeeper to run on different ports this isn’t going to help you much; it also assumes that you are testing on one of the cluster boxes, along with a ton of other assumptions.

The following assumes that you have found and changed to the Kafka installation directory – for default Ambari or HDP installations, this is probably under /usr/hdp, but your mileage may vary. To begin with, you might need to pre-create a testing topic:

[code lang=text]
bin/kafka-topics.sh \
    --zookeeper localhost:2181 \
    --create \
    --replication-factor 1 \
    --partitions 1 \
    --topic test
[/code]

then in one terminal window, run a simple consumer:

[code lang=text]
bin/kafka-console-consumer.sh \
    --zookeeper localhost:2181 \
    --topic test \
    --from-beginning
[/code]

Note that this reads from the beginning of the topic; if you want to just tail the recent entries, omit the --from-beginning flag. Finally, in another terminal window, open a dummy producer:

[code lang=text]
bin/kafka-console-producer.sh \
    --broker-list localhost:6667 \
    --topic test
[/code]

There is an annoying asymmetry here – the consumer and most other utilities look to ZooKeeper to find the brokers, but the dummy producer requires an explicit pointer to one or more of the brokers. In the producer window, type stuff, and you should see it echoed in real time in the consumer window. When finished, ^C out of the producer and consumer, and consider your work done.
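If you want to tidy up afterwards, the test topic can be deleted in the same way it was created – assuming the brokers have delete.topic.enable set to true, otherwise the topic is merely marked for deletion:

[code lang=text]
bin/kafka-topics.sh \
    --zookeeper localhost:2181 \
    --delete \
    --topic test
[/code]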

Lies, Damned Lies and Programmers

I recently came across a really nice set of articles – not directly related to each other – dealing with various profound errors that programmers and system designers fall into when dealing with names and addresses.

The TL;DR if you don’t read these: names and addresses are hard and most things you believe about them are wrong.

Let’s start with Falsehoods Programmers Believe About Names. Without even trying, the author lists 40 things we believe about names that are just plain wrong.

In a similar vein, Falsehoods programmers believe about addresses, which particularly speaks to me. One of the fundamental errors about addresses is to think they identify a location. This is incorrect. An address might identify a location, but it is fundamentally a description which instructs a postman how to deliver a letter or parcel. Substitute pizza operative, Amazon driver or writ server as desired.

Even without getting into the weirdness around the actual shape of the planet, Falsehoods programmers believe about geography touches on place names.

And as a bonus: Falsehoods programmers believe about time – computers prove to be pretty bad clocks, and working out a calendar is very complicated.