One thing that I’ve noticed over a quarter of a century of banging out code is how the expectations for what a coder will know have expanded enormously.
When I began, the expectation was that you had an understanding of how computers roughly worked – the old CPU plus memory plus storage model that we’ve had since the beginning – and facility in one or two languages. Cobol, Pascal, Basic, Fortran, some assembler. It was anticipated that you’d be able to sit down with a manual and a compiler, and teach yourself a new language in a few days. The important part was knowing how to think, and how to really look at the problem. And of course, how to squeeze every last cycle out of the CPU, and do amazing things in a small memory footprint.
Around the turn of the century, not much had changed. Your average coder was expected to be comfortable working in a three-tier architecture, to have some vague idea of how networks and the internet worked, to be at ease with SQL and a database or two, and to have some notion of how to work collaboratively in a multi-discipline team. And of course, to have a deep understanding of a single language, plus whatever flavour-of-the-month frameworks or standard libraries existed. UML and RUP were in vogue, Agile was still newfangled, and there was always a wall of design documentation to ignore.
Now is the age of the ultra-specialist. You need to be server-side, or middleware, or client-side. You need to know a language intimately, and be vastly knowledgeable about half a dozen ancillary technologies – in the Java world, for instance, you need to grok Spring, and JMS, and JMX, and Hibernate, and Maven, and a CI tool, and a specific IDE. You need to understand crypto, and security, and enterprise integration and architectural patterns, and networking.
I fear that this rant has gone vague and wandered off the rails. There is a strange paradox at work now: we are expected to specialise deeply in the problem spaces we address, yet to carry in our heads a hugely expanded toolset.