Hear, hear!

“Maybe you don’t work for or with a Global 2000 company, so I’ll let you in on a little secret: They Can’t Hear You! That’s right, the CIOs, and Enterprise Architects, and, yes, even the journeyman programmer employed by these firms have no idea that there’s even a discussion going on. […] They don’t read technology blogs, they don’t know who DHH is, and they’re excited to get their new Vista gear.”

I was quite relieved when I read Pete Lacey’s post, as it means the world of cubicles hasn’t swallowed IT entirely. There are some companies where the CEO is blogging. Some might understand this as a marketing instrument and broadcast medium at first, but the medium will get them – there is no blogging without discussion. And if the CEO sets the example, employees are likely to follow.

There’s another, rather unpleasant aspect to this, however: if some (the?) blogging community unanimously votes for or against something, that says nothing about the validity of the result for another, decoupled community like the Global 2000 flock. Just as an example, a discussion among technicians is likely to exclude commercial and organizational aspects, either willfully or for lack of expertise (ignorance is such a hard word). A CEO would be ill-advised to just copy their results.

Motivation

Darth Vader: You may dispense with the pleasantries, Commander. I’m here to put you back on schedule.
Moff Jerjerrod: I assure you, Lord Vader. My men are working as fast as they can.
Darth Vader: Perhaps I can find new ways to motivate them.

What is managing software development?

Raganwald: What is managing software development?
“The fundamental definition: ‘Management’ is accountability for results, authority over resources, and making decisions based on judgement. […] and courage.”

Thinking about it, I wonder if I have ever seen anyone “managing” software to such an extent. In my reality, there are too many constraints, most of the time, and too much fog around the results. It might be nice for a manager to prove himself like that (“The project’s mine! All mine! And I did it!”), but then again his project might not be much fun to work on.

Subtext Refactoring

A few months ago I tried to implement some refactorings for the Subtext “programming language”. I think I now understand some reasons why that did not work out.

A refactoring changes the shape of your program, while maintaining the program’s behaviour. You do it so the code gets into a shape that makes the changes you are aiming for easier to apply, and makes it easier to reason about their correctness. The correctness of the initial refactoring is guaranteed by your IDE.
Note that even a refactoring is a change; it is only guaranteed to be “correct” as long as you live in a closed world, where all the code you might interact with is contained in your IDE project. As soon as you leave that safe haven, by using type casts, reflection, or dynamic linking, or by guaranteeing backwards compatibility for published interfaces, that guarantee is gone.
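To make that concrete, here is a minimal sketch in Java of a behaviour-preserving refactoring (Extract Method); the Invoice class and the 19% rate are invented for illustration:

```java
// A minimal Extract Method example: same behaviour, new shape.
// Invoice and the 19% rate are made up for illustration.
class Invoice {
    // Before the refactoring: the rule is inlined.
    double totalBefore(double net) {
        return net + net * 0.19;
    }

    // After the refactoring: the rule has a name and can be reused.
    // An IDE performs this transformation mechanically and, within
    // its closed world, guarantees that both methods behave alike.
    double totalAfter(double net) {
        return net + vat(net);
    }

    private double vat(double net) {
        return net * 0.19;
    }
}
```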

So, what about Subtext?
Firstly, Subtext does not have an external “behaviour” that might be observed. It just has a system state, and it’s up to the user to observe any interesting bits of that system state at any time, and even in retrospect. The system state encompasses code, data, past states, and an operation history. It’s all the same, actually, just nested nodes.
This makes it somewhat difficult to separate a program’s behaviour from the program itself. The solution I tried for transferring the refactoring idea is to guarantee some property of some usage of a node before and after a refactoring: “When invoked as a function with the same arguments as before the refactoring, the result subtree will look the same” – seems quite straightforward. I made a whole catalog of these.
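Rendered in Java terms, the property I was after looks roughly like the sketch below. Tree, PreservationCheck and the structural-equality check are my stand-ins for the actual Subtext structures, not anything Subtext itself defines:

```java
import java.util.List;
import java.util.function.Function;

// A rough Java rendering of the preservation property. Tree is a
// stand-in for a Subtext result subtree; records give structural
// equality, which is what "the result subtree will look the same"
// needs.
public class PreservationCheck {
    record Tree(String label, List<Tree> children) {}

    static boolean preservesBehaviour(Function<Tree, Tree> before,
                                      Function<Tree, Tree> after,
                                      Tree argument) {
        // Same argument in, structurally equal subtree out.
        return before.apply(argument).equals(after.apply(argument));
    }
}
```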

Secondly, Subtext is completely untyped: at a sufficient level of abstraction, function invocation is the same as copying with changes, which is also the same as assignment, method overriding, and object creation, and it is exactly at this level that Subtext operates. Also, all the innards of a function are accessible (and overridable), not just its result. So essentially the only way to know what a “function” does to its arguments is to run the function. In particular, due to untyped higher-order data flow, an automated analysis can’t really tell where a node modified by the refactoring may eventually be copied to. But without this information, the refactoring cannot guarantee much about the system state. (I tried some heuristics, which turned out to be either too coarse or incomprehensible to a user.)

Static types and other distinctions made by the programming language form the contract between you and your tools. As long as you stick to the types, your tools can and will help you. If you don’t have a type system, or the type system is too clever, you do not notice when you cross the line, and your tools become unreliable.
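A small Java example of where that contract ends; the class is made up, but the effect is real: an IDE rename updates the direct call, while the reflective lookup slips through:

```java
import java.lang.reflect.Method;

// Where the tool contract ends. A rename refactoring updates the
// direct call below, but the reflective lookup hides the method
// name in a string the IDE cannot see through.
class ToolContract {
    public String getName() { return "example"; }

    public static void main(String[] args) throws Exception {
        ToolContract c = new ToolContract();
        c.getName();  // safe: renamed along with the declaration

        Method m = ToolContract.class.getMethod("getName");
        m.invoke(c);  // breaks at runtime if getName is renamed
    }
}
```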

If a project is small enough and short-lived enough, and the language concise enough, you may not need tools as desperately. Some people also find tools like Eclipse or IntelliJ IDEA too complex or heavyweight and prefer to stick to their ASCII editor, and therefore focus on the “conciseness” aspect, or restrict the scope of their project, or rely on tests. That’s OK as long as you realize that by liberating yourself from types, you are also limiting yourself in various ways.

The hard way

“Academic discussions won’t get you a million new users. You need faith-based arguments. People have to watch you having fun, and envy you.”

Steve Yegge points out that culture matters, and may make Ruby the Next Big Thing.

When asked why we are using Java at work, I said “because we have to do things the hard way”. It’s the way that’s guaranteed to work: rock-solid, interoperable, stable, enough skilled engineers out there in the marketplace, enough googleable solutions for our everyday problems, actively maintained for years to come, Not My Fault if it does not run, etc.

If Ruby ever gets to that point, great. But until that day, everyone in middle management does their best to ignore the fun factor, and engineers will keep dreaming (of using Ruby, of owning an Apple Mac, of a world without deadlines, legacy and momentum).

The hard way means: having to wait until the Next Big Thing is the Current Big Thing.

Surprise

“Before that, like every beginner, I thought you could beat, pummel, and thrash an idea into existence. Under such treatment, of course, any decent idea folds up its paws, turns on its back, fixes its eyes on eternity, and dies.”

Ray Bradbury, Just This Side of Byzantium

Can Platforms be Agile?

Java is an excellent example of how you start out with a design that is rather “impure”, and has to be, in order to survive the initial competitive struggle; and then later, when there would be enough resources to realize some of the Real Cool ideas, you realize you can’t, because you’re stuck with what and how you started.

One such example is the handling of primitive types. Auto-boxing is just a hack that tries to introduce “Everything Is An Object” through the back door, while the special cases introduced in the beginning stay in place.
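The seam still shows in everyday code. A standard example: boxed values from -128 to 127 are cached, larger ones are not, so reference comparison with == gives inconsistent-looking answers:

```java
// The primitive/object seam leaking through auto-boxing: small
// boxed values are cached, larger ones are freshly allocated, and
// == compares references rather than values.
public class BoxingQuirk {
    public static void main(String[] args) {
        Integer a = 127, b = 127;
        System.out.println(a == b);      // true: both refer to the cached instance
        Integer c = 128, d = 128;
        System.out.println(c == d);      // false: two distinct boxed objects
        System.out.println(c.equals(d)); // true: value comparison works as expected
    }
}
```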

A more recent example is the closures for Java / CICE discussion, where everyone agrees it’s going to be an addition to the Java language; no way you could replace some old feature made redundant by the new stuff. You cannot even extend the existing interfaces, because that would break implementations of those interfaces that exist out there in the wild.
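The problem is easy to see in code; the listener interface below is invented, but the situation is general:

```java
// Why a published interface cannot simply grow: the names here are
// made up, but the breakage is real.
interface AppListener {
    void onEvent(String event);
    // Adding a new method here, e.g.
    //     void onShutdown();
    // would break every implementor in the wild, like the one
    // below, which would no longer compile against the new version.
}

class LoggingListener implements AppListener {
    public void onEvent(String event) {
        System.out.println("event: " + event);
    }
}
```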

There are fundamental decisions that cannot be changed easily once people have started building upon them. Operating Systems, Programming Languages, Platform APIs, Off-The-Shelf Software: they all have more than just a little legacy compatibility problem. The current anti-intellectual “agile” trend is good enough for application development, where you merely apply those flexible, mature and well-thought-out processes and tools; but trying to develop and maintain a programming language in such a light-footed way means asking for disaster (or at least, bad quality).

A platform won’t stay competitive for long if you keep bolting on new things until they get in each other’s way. If Saint-Exupéry is right, you need to remove things to increase consistency, making the language and platform more understandable and more usable.
The way to save your investment is to modularize, parallelize, interoperate, and deprecate. You can’t take things away from the Java platform, but you can provide increasingly consistent alternatives for parts of it (“nio”), interoperate with the old stuff for the time being, and hope for the old stuff to fade away. Note that this is not an agile approach: it does not allow a closed-world assumption, it aims to keep long-term promises, and it ties up significant resources (for maintaining several alternatives plus the interoperability between them). Agile projects do not have a history; platforms do.
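The java.nio.channels.Channels utility is a small example of that interoperation: it wraps the old stream API in the new channel API, so callers can migrate piecemeal. A minimal sketch (the file name is made up):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

// Old java.io streams and the newer "nio" channels coexist; the
// Channels utility bridges between them, so code can migrate
// piecemeal instead of being rewritten all at once.
public class NioBridge {
    public static void main(String[] args) throws IOException {
        InputStream in = new FileInputStream("data.txt"); // legacy API; "data.txt" is a made-up name
        ReadableByteChannel ch = Channels.newChannel(in); // nio view of the same stream
        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (ch.read(buf) != -1) {
            buf.flip();
            // ...process buf here...
            buf.clear();
        }
        ch.close();
    }
}
```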

Most parts of Java will be with us for years to come. So please, don’t spoil the Java language and syntax with a mix of redundant, overlapping, complex features; don’t add something you may want to remove later. If it’s not proven yet, it doesn’t have to be part of the platform; just provide the hooks for others to provide it and experiment with it. I’d rather have a mix of language environments that Play Well With Others, with a JVM that acts as a “Common Language Runtime”, than a monolithic superplatform that assimilates every feature under the sun.