Mutability, Arrays and the Cost of Temporary Objects in Java

In his 2001 must-read book Effective Java, Joshua Bloch devoted an entire item to the advice “Favor immutability”. Java theory and practice: To mutate or not to mutate? provides an excellent overview of what this means and why it matters. It states:

An immutable object is one whose externally visible state cannot change after it is instantiated.

A Brief Overview of Immutability

Let’s say you want to create a class to model arbitrary precision rational numbers (ie fractions). A mutable version might start out:

public class BigRational {
  private BigInteger numerator = BigInteger.ZERO;
  private BigInteger denominator = BigInteger.ONE;

  // constructors and so on

  public BigRational add(BigRational other) {
    if (numerator.signum() == 0) {
      numerator = other.numerator;
      denominator = other.denominator;
    } else if (other.numerator.signum() == 0) {
      // no action required
    } else if (denominator.equals(other.denominator)) {
      numerator = numerator.add(other.numerator);
    } else {
      // this could be optimized for greatest common divisor
      numerator = numerator.multiply(other.denominator).add(other.numerator.multiply(denominator));
      denominator = denominator.multiply(other.denominator);
    }
    return this;
  }

  // etc
}

This is how many classes naively start. The problem comes when this class is used as a data member or a parameter. Consider this class:

public class MyClass {
  private BigRational data = new BigRational();

  public BigRational getData() {
    return data;
  }
}

Doing this will modify the internal state of the class:

MyClass mc = new MyClass();
BigRational rational = mc.getData();
rational.add(other); // 'other' is any non-zero BigRational; mc's internal state has now changed

Obviously this behaviour isn’t desirable, which leads to the practice of defensive copying, a practice familiar to any C programmer. Each time this getter is called a temporary copy is created so the internal state of the class isn’t violated.

One of the biggest early errors in Java’s design was that the Date class is mutable. This means that any API that uses dates either has to use a non-standard date class or defensively copy date instances. Oddly, Java got String, BigInteger and BigDecimal all right as they’re all immutable. Even stranger, the later Calendar class (introduced in JDK 1.1) was also made mutable.
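To make the defensive copying concrete, here is a minimal sketch using the mutable Date class (the Employee class and its hireDate field are hypothetical, purely for illustration):

```java
import java.util.Date;

public class Employee {
  private final Date hireDate;

  public Employee(Date hireDate) {
    // copy on the way in: the caller keeps a reference to the original
    this.hireDate = new Date(hireDate.getTime());
  }

  public Date getHireDate() {
    // copy on the way out: callers can't mutate our internal state
    return new Date(hireDate.getTime());
  }
}
```

Every call to getHireDate() allocates a temporary Date, which is exactly the per-call cost of temporary objects this post is about.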

An immutable version would look something like:

public class BigRational {
  private final BigInteger numerator;
  private final BigInteger denominator;

  public BigRational() {
    this(BigInteger.ZERO, BigInteger.ONE);
  }

  public BigRational(BigInteger integer) {
    this(integer, BigInteger.ONE);
  }

  public BigRational(BigInteger numerator, BigInteger denominator) {
    if (denominator.signum() == 0) {
      throw new IllegalArgumentException("denominator cannot be zero");
    }
    if (numerator.signum() == 0) {
      this.numerator = BigInteger.ZERO;
      this.denominator = BigInteger.ONE;
    } else {
      this.numerator = numerator;
      this.denominator = denominator;
    }
  }

  public BigRational add(BigRational other) {
    if (numerator.signum() == 0) {
      return other;
    } else if (other.numerator.signum() == 0) {
      return this;
    } else if (denominator.equals(other.denominator)) {
      return new BigRational(numerator.add(other.numerator), denominator);
    } else {
      // again, this could be optimized for greatest common divisor
      return new BigRational(
          numerator.multiply(other.denominator).add(other.numerator.multiply(denominator)),
          denominator.multiply(other.denominator));
    }
  }

  // etc
}

Arrays are Mutable

The big problem with all this is that Java arrays are mutable. So for example:

public void doStuff(String[] args) {
  args[0] = "Hello world";
}


String[] arr = { "one", "two", "three" };
doStuff(arr);
System.out.println(arr[0]); // Hello world

This is one big reason why you should use Lists instead of arrays in almost all circumstances where you have a choice. Lists can at least be made unmodifiable:

List<String> list = new ArrayList<String>();
final List<String> immutableList = Collections.unmodifiableList(list);
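Note that unmodifiableList returns a wrapper view, not a copy: mutator calls on the view throw, but changes to the backing list still show through. A quick sketch (the class and method names here are my own):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UnmodifiableDemo {
  // wrap a mutable list in an unmodifiable *view*
  public static List<String> freeze(List<String> list) {
    return Collections.unmodifiableList(list);
  }

  public static void main(String[] args) {
    List<String> list = new ArrayList<String>(Arrays.asList("one", "two", "three"));
    List<String> view = freeze(list);
    try {
      view.add("four"); // all mutators throw
    } catch (UnsupportedOperationException expected) {
      System.out.println("mutators throw UnsupportedOperationException");
    }
    list.add("four"); // ...but the backing list can still change
    System.out.println(view.size()); // prints 4
  }
}
```

So for true immutability you also have to make sure nothing else retains a reference to the backing list.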

On a side note, this rather verbose syntax would get a little easier if the collection literals proposed for Java 7 make it into the language, for example:

List<String> list = ["one", "two", "three"];
final List<String> immutableList = Collections.unmodifiableList(list);
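A hedged aside: those literals are only a proposal and may never ship. With the current language, the closest equivalent is a small varargs helper (ListLiterals and listOf are names I have made up for illustration):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ListLiterals {
  // varargs helper approximating the proposed list literal syntax
  public static List<String> listOf(String... items) {
    return Collections.unmodifiableList(Arrays.asList(items));
  }

  public static void main(String[] args) {
    final List<String> immutableList = listOf("one", "two", "three");
    System.out.println(immutableList); // prints [one, two, three]
  }
}
```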

Enum Values

Java 5 introduced typesafe enums, largely based on Joshua Bloch’s earlier class-based proposal. The unofficial versions had lots of potential issues (eg having to implement readResolve() to stop serialization creating new instances). In my opinion, enums are one of the (increasingly few) language constructs where Java is significantly better than, say, C#: Java’s enums aren’t just thinly wrapped integers (as they are in C/C++/.Net) and can also have behaviour.

Java enums have a static method called values() which returns an array of all instances of that enum. After the lessons of the Date class, this particular decision was nothing short of shocking. A List would have been a far more sensible choice. Internally this means the array of instances must be defensively copied each time it is called, forcing you to write code like this repeatedly:

public enum Season {
  WINTER, SPRING, SUMMER, FALL;

  private static final List<Season> VALUES =
      new ArrayList<Season>(Arrays.asList(values()));

  public static List<Season> getValues() { return VALUES; }
}

Is This Really Necessary?

There is nothing inherently wrong with temporary objects. It’s a question of degree. So creating several hundred (or thousand) temporary objects isn’t any big deal. At some point there is such a thing as too many.

I will demonstrate two things here:

  1. The cost of temporary arrays; and
  2. The correct way to generate random enums.

This post was prompted by Java: Pick a random value from an enum? where the poster created a Random on every iteration. Some years ago this was doubly bad because Randoms were seeded with the current time (in milliseconds or even seconds), so you wouldn’t get a particularly random distribution if they were created in a short space of time. Even now it’s worth answering the questions of fairness and cost.

This enum will be used:

public enum Season {
  WINTER, SPRING, SUMMER, FALL;

  private static final List<Season> VALUES1 =
          new ArrayList<Season>(Arrays.asList(values()));
  private static final Season[] VALUES2 = values();
  private static final int SIZE = VALUES2.length;
  private static final Random RANDOM = new Random();

  public static Season random1() {
    return values()[new Random().nextInt(SIZE)];
  }

  public static Season random2() {
    return values()[RANDOM.nextInt(SIZE)];
  }

  public static Season random3() {
    return VALUES1.get(new Random().nextInt(SIZE));
  }

  public static Season random4() {
    return VALUES1.get(RANDOM.nextInt(SIZE));
  }

  public static Season random5() {
    return VALUES2[new Random().nextInt(SIZE)];
  }

  public static Season random6() {
    return VALUES2[RANDOM.nextInt(SIZE)];
  }
}

with the following test harness:

public class Temporary {
  private static final int COUNT = 30000000;

  public static void main(String[] args) {
    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
    MemoryMonitor monitor = new MemoryMonitor();
    executor.scheduleAtFixedRate(monitor, 0, 10, TimeUnit.MILLISECONDS);
    int[] tally = new int[4];
    long baseline = usedMemory();
    long start = System.nanoTime();
    for (int i = 0; i < COUNT; i++) {
      tally[Season.random1().ordinal()]++; // change to random2() .. random6() for each run
    }
    long end = System.nanoTime();
    executor.shutdown();
    try {
      executor.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
    } catch (InterruptedException e) {
      throw new RuntimeException(e);
    }
    long memoryUsed = monitor.peak() - baseline;
    for (Season season : Season.values()) {
      System.out.printf("%s: %,d%n", season, tally[season.ordinal()]);
    }
    System.out.printf("%nCompleted %,d iterations in %,.3f seconds using %,d bytes%n",
        COUNT, ((end - start) / 1000000) / 1000.0d, memoryUsed);
  }

  private static long usedMemory() {
    Runtime runtime = Runtime.getRuntime();
    return runtime.totalMemory() - runtime.freeMemory();
  }

  private static void waitForEnter() {
    try {
      new BufferedReader(new InputStreamReader(System.in)).readLine();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}

public class MemoryMonitor implements Runnable {
  private final Runtime runtime = Runtime.getRuntime();
  private final List<Long> usage = new ArrayList<Long>();

  public void run() {
    usage.add(runtime.totalMemory() - runtime.freeMemory());
  }

  public List<Long> usage() {
    return usage;
  }

  public long peak() {
    return Collections.max(usage);
  }
}

with each method being run in turn.

The Results

Method     Run time (seconds)   Peak Memory Usage (bytes)
random1    9.746                681,288
random2    5.914                665,592
random3    5.123                669,408
random4    1.476                18,376
random5    4.593                661,368
random6    1.056                18,376

From this we can draw several conclusions:

  1. Creating a Random on every invocation is fair (in distribution terms) but has a high cost in temporary objects and CPU time (by a factor of 2-5);
  2. The garbage collector is clearly working: both the temporary arrays and the temporary Random objects contribute to peak memory usage, but Java reclaims them; the high peaks most likely reflect garbage accumulating until a collection is triggered;
  3. Using a static copy of the enum values is 2-5x as fast;
  4. A static array copy is about 20-40% quicker than a static List copy; and
  5. The more optimized version uses 30x less memory and runs 10x quicker than the least optimized version.


For significant use of an enum’s values() method, it’s a no-brainer: create and use a static copy instead. It’s faster and uses far less memory. On non-trivial applications it will also mean less memory fragmentation and fewer (possibly expensive) GCs, which is a significant issue for high-usage Web applications.
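As a footnote, the same idea can be written once, generically, rather than per enum. A sketch (RandomEnum and the Coin enum are my own illustrative names; note that getEnumConstants() clones its array on every call, so a per-enum cached copy like Season’s is still faster in hot code):

```java
import java.util.Random;

public class RandomEnum {
  private static final Random RANDOM = new Random();

  // pick a uniformly random constant of any enum type
  public static <T extends Enum<T>> T random(Class<T> enumClass) {
    T[] constants = enumClass.getEnumConstants();
    return constants[RANDOM.nextInt(constants.length)];
  }

  enum Coin { HEADS, TAILS }

  public static void main(String[] args) {
    System.out.println(random(Coin.class));
  }
}
```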

Hard Numbers on Stackoverflow Careers

This is a follow-up to Joel Inc., Stackoverflow Careers and Jumping Sharks, posted late last week.

Joel posted Stack Stats this week in which he demonstrates the correlation between Stackoverflow reputation and Careers take-up.

Now the exact meaning of this graph isn’t stated. Does it mean that 30% of users with 50,000 reputation and above have submitted CVs? Let’s assume it does.

Reputation   Percentage   Users in Range   # of CVs
1,000        8%           3,257            261
2,000        10%          1,196            120
3,000        11%          640              70
4,000        12%          373              45
5,000        13%          244              32
6,000        13.5%        153              21
7,000        14%          112              16
8,000        15%          95               14
9,000        16%          48               8
10,000       17%          220              37
15,000       19.5%        79               15
20,000       22%          58               12
30,000       26%          28               7
50,000       30%          15               5
Total        10.2%        6,518            663

So we have 145 employers (as of 15 Dec 2009) and 663 job seekers of the 6,518 in the sample representing a percentage take-up of 10.2%.

I would guess that the vast majority of those would’ve paid $29 for 3 years, so these ~700 users account for about $21,000 in revenue over 3 years.

Is this kind of barrier—charging job seekers—really worth that kind of revenue stream?

Of course the hope is both that:

  1. The number of candidates will substantially grow; and
  2. Many (or most) of them will convert to paying $99/year in 3 years.

Three years from now I would consider it optimistic that the users matching the profile might number 40,000 instead of today’s 10 to 25,000. That doesn’t account for the natural attrition rate either (some users have already become basically inactive).

Considering that not all users will be looking for work at the same time (assuming Goldman Sachs’ next crackpot house of cards hasn’t come tumbling down yet), it’s hard to imagine the take-up rate being higher than 15-20% and that’s being optimistic.

So if everything goes well 10,000 people are paying $99/year. $1 million a year—basically money for nothing—is nothing to sneeze at. I can’t see it happening however.

Even if it does, it’s questionable whether this is the critical mass required to attract employers. I guess time will tell.

Joel Inc., Stackoverflow Careers and Jumping Sharks

Joel Spolsky is a legend in the programming world. His blog—Joel on Software—is the most popular and well-known programming blog. In mid-2008, Joel and Jeff Atwood—of Coding Horror fame—launched Stackoverflow, a free site for asking programming questions.

Stackoverflow is clearly a success but the sister sites haven’t fared nearly as well. Recently Jeff and Joel launched Stackoverflow Careers, a site for programmers to find jobs and employers to find programmers.

Stackoverflow Careers may just be a bridge too far.

Let’s Talk About… Joel

Joel on Software was the first blog I ever read. I read it before anyone really knew what a blog was. Controlling Your Environment Makes You Happy was one of those things I read that completely changed my perspective. How Microsoft Lost the API War I consider to be almost prophetic in its predictions regarding the then-Longhorn now-Vista boondoggle and desktop bloodletting by Web applications.

But something isn’t right in the Land of Joel.

In the late 90s, during a brief flirtation with strenuous physical activity, I learnt to SCUBA dive. I went to one of those courses that consist of an evening of instruction on the evils of nitrogen, a weekend in the pool and then a weekend in the ocean. This was a PADI course, which is very much the consumer-grade diving education, and I state that as a simple observation, not a judgement or accusation. At the other end of the spectrum is NAUI.

PADI is all about selling you stuff—gear, courses, whatever. A friend remarked to me that PADI stood for Put Another Dollar In.

NAUI on the other hand is much more highly regarded but less prolific. It is a not-for-profit organisation. Whereas some accuse PADI of dumbing down SCUBA training, nothing of the sort is levelled against NAUI. That same friend said NAUI stands for Not Another Untrained Idiot.

What does this have to do with Joel? Whereas Joel was once the NAUI-like font of wisdom, now it just seems like he’s trying to sell me stuff.

Jumping the Shark

Of course I’m not the first to articulate this.

In recent times Joel has taken quite a bashing, for example Joel Spolsky, Snake-Oil Salesman and Sten Anderson’s I Heart Joel on Software.

Sten’s comments are particularly interesting because what he says is true: all Joel’s endless talk about great programmers is thinly disguised disdain for the 99% of us that didn’t go to MIT, Stanford, UW, Yale, Harvard or UPenn.

Amusingly, Jeff Atwood posted several years ago Has Joel Spolsky Jumped the Shark? going so far as to say:

I reject this new, highly illogical Joel Spolsky. I demand the immediate return of the sage, sane, wise Joel Spolsky of years past. But maybe it's like wishing for a long-running television show to return to its previous glories.

I guess he got over it.

Side note: Jeff was responding to Language Wars (emphasis added by Jeff):

FogBugz is written in Wasabi, a very advanced, functional-programming dialect of Basic with closures and lambdas and Rails-like active records that can be compiled down to VBScript, JavaScript, PHP4 or PHP5. Wasabi is a private, in-house language written by one of our best developers that is optimized specifically for developing FogBugz; the Wasabi compiler itself is written in C#.

I admit it: I love a good rant. And not just ranting for ranting’s sake but a rant with a message, an essential kernel of truth, a pearl of wisdom. It’s hard to forget Zed Shaw’s now-infamous (albeit retracted) Rails is a Ghetto rant of nearly two years ago. Yesterday I read Giles Bowkett’s Blogs are Godless Communist Bullshit. It’s long but entertaining and absolutely worth reading.

But is all this criticism justified?

Firstly, some background.

IT Recruitment

In Europe and Australia programmers (and other IT professionals) are found in three ways:

  1. Direct recruitment by the employer. This usually means big employers who have dedicated HR departments to filter out CVs, book interviews and so on. Such candidates will most likely become salaried employees of the company;
  2. Word of mouth; and
  3. Through recruitment agencies.

In my experience recruitment agents are loathed by IT workers (eg Why is IT recruitment so bad?). Most of the time they’re utterly clueless (I have in all seriousness been asked “I see you have 7 years of Java experience but do you have any J2SE experience?”). Horror stories are legion. IT recruitment in London in particular is a soul-destroying experience.

Recruiters will fill positions on a permanent (salaried) or contract (paid by the hour, day, week or month) basis.

The recruiter will earn a fee that is typically around 10-15% of the candidate’s annual salary upon successfully filling the position. If the employee leaves in the probationary period (typically three months) some or all of that will be refunded.

With contractors the recruiter will typically earn a margin of 10-25% (or even higher) on top of the contractor’s rate either for a fixed term (eg it scales down after a year) or in perpetuity. Expat contractors typically have criminally high margins put on top of what they earn, at least initially.

So recruitment is expensive.

Compare that to placing ads on job boards, which typically costs hundreds of dollars (eg FAQ and Monster Job Posting) and lasts weeks. One ad can potentially fill multiple positions. Employers will typically keep CVs on file, and being contacted some time after applying is not uncommon. So ads can be effective, although there can be a lot of chaff.

IT recruitment is broken so there’s definitely room for a solution.

Stackoverflow Careers

Careers is another site hoping to capitalize on the success of Stackoverflow. Programmers routinely demonstrate the ability to self-organize, which I think explains—at least in part—that success. Computer science is also centuries old. Yes, I said “centuries old”. So before some reddit lurker points out that computers were born in the mid-twentieth century, I suggest you consult the Timeline of computing 2400 BC–1949 and the work of Charles Babbage and others.

The latest money-making venture is Stackoverflow Careers, heavily cross-promoted by Jeff Atwood (Introducing Stack Overflow Careers and Stack Overflow Careers: Amplifying Your Awesome) and Joel (Upgrade your career and Programmer search engine) as well as echoes in the blogosphere.

Despite the success in terms of audience size (Joel in his Google Tech Talk claims a ~30% programmer share, which is huge if true), programmers are a hard bunch to monetize (see Our Amazon Advertising Experiment). Careers is the latest incarnation.

It’s free to have a public CV but having a private CV costs money (allegedly $99/year after 31st December but don’t be surprised if that changes). The private CV is searchable by employers and allows (as Jeff/Joel put it) “deep” integration with Stackoverflow.

The employers are paying too: anywhere from $500 for a week to $5,000 for a year (see the FAQ).

Not cheap. So what are we getting for our money?

The Hollywood Analogy

Joel claims:

In Hollywood, studios who need talent browse through portfolios, find two or three possible candidates, and make them great offers. And then they all try to outdo each other providing plush work environments and great benefits.

Make no mistake: you’re being sold something here. The allure of stardom is deliberate bait. Giles succinctly sums this up:

This last part is laugh-out-loud funny. That's not how Hollywood works. I'm an actor, I've been studying acting for years, and I know award-winning actors who still have to go out on auditions like everybody else. You might wonder how a newbie like me, with nothing but Cop #3 in a student film to his credit, can claim to know award-winning, seasoned professionals. It's simple: because they have to go on auditions like everybody else.

I will take one issue with what Giles said:

Robert Downey Jr. had to fight like hell to get the lead role in Iron Man.

Yes, but there’s a reason for that. He had a serious drug problem and any studio is going to balk at betting a billion dollar franchise on a cokehead.

But I digress.

Here’s another difference: actors are basically the most flexible labour market in the world. They go where the work is. The film shoots for 40 weeks in Siberia? Fine, no problem. Actors go where the films are.

Programmers on the other hand are not nearly as flexible. Programmers are regular workers. We have families, friends, mortgages and so on. Sure we might move from St Louis to San Francisco for a job but we also might not. I think it’s safe to say that more often than not, we’re not looking to move across country. Hell, we’ll even turn down a job if it’s in the wrong part of the same city.

Imagine how far you’d get as an actor if you said “I’d love to work on your TV show but the studio is in Burbank and the commute from Redondo Beach is a bitch so I think I’ll pass.” (Only knowing LA to change planes, I make no apologies for any gross errors in LA geography I may have just made.)

So instead of there being a handful of job markets for actors there are probably 100 or more for programmers.

So What’s In It For Me?

From the FAQ:

If you are seeking employment, we do require a modest annual payment to file your CV. Filing your CV makes it eligible to appear in searches by hiring managers via our private search interface. This fee allows us to ensure employers that everyone they find is actively looking for a job.

Isn’t the fact that I’ve filled out a CV and ticked a box that says I’m looking for work sufficient? Apparently not.

Consider Finding Great Developers:

The great software developers, indeed, the best people in every field, are quite simply never on the market.

So the target market seems to be those developers who think they’re great developers but actually aren’t. If they were they wouldn’t be looking. I get it: everyone is better than average.

Giles sums this up:

The number one rule of the con: you can't con an honest man … Try to get something for nothing, just because Joel Spolsky said you could? You're going to get burned.

I should point out that I signed up in the beta. I was under no illusions however (then again, who ever thinks they are?). The chances of an employer looking in my remote backwater are next to nil but I figured at $30, at worst I was out two lunches from Nando's.

And If I’m A Hiring Manager?

Approximately 6,500 Stackoverflow users have 1,000 reputation or more. This is an arbitrary cut-off but the point is this: integration with Stackoverflow only adds value if you’ve contributed a sufficiently large body of answers for employers to mine. Go up to 2,000 rep and you’re down to fewer than 3,200 users. And so on.

Let’s be optimistic and say the potential audience for whom Stackoverflow will add value to their CV is 10,000. A number of these can be eliminated as being students, retired, incapable of working (eg disability or serious prolonged injury) or simply not looking for work.

Joel claims:

But Stack Overflow Careers doesn’t have to be massive. It’s not for the 5.2 million people who visit Stack Overflow; it’s for the top 25,000 developers who participate actively.

Want to know what the 25,000th user looks like?

I mean no disrespect to these people but “participate actively”?

Take careful note of the language too: 25,000 from 5.2 million? Hell, you’re already the top half of one percent! You’re elite, positively l33t! Uh huh.

Crunching the Numbers

There are at least 100 distinct geographical job markets for an employer. If you’re lucky 10% of the pool is accessible to you either by being in the right place or willing to relocate.

Of those 10%, maybe 10% have the right skills. The importance of programming languages is definitely overstated by (typically clueless) HR departments and recruiters. It’s also true that good developers can program in anything (given sufficient time) but not all languages are interchangeable in all situations. I would consider a Java Web developer to be largely interchangeable with an ASP.NET C# Web developer (in that there is sufficient crossover to enable a sufficiently speedy transition) but I wouldn’t hire a Ruby programmer to do C programming for microcontrollers and embedded devices. The transition from unmanaged (eg C/C++) to managed (eg C#/.Net) code can be steep enough.

Of this reduced pool, how many have the right experience? The more experienced you get as a developer, generally the more important domain knowledge becomes. I wouldn’t hire a mobile telephony architect to design a system for market-making options on commodities futures because you’d spend 6 months explaining bid/ask spreads, what a future is, what an option is, in-the-money, out-of-the-money, short, long, contango, volatility, Black-Scholes… the list goes on.

Of the remaining few, who has the right amount of experience? You wouldn’t hire a fresh college grad to mentor junior developers.

Now that you’ve got a short list (“short” being the operative word), consider how many are actually available.

And you haven’t even interviewed anybody yet!

So if you optimistically assume that 10,000 people sign up for Careers, chances are you’re down to less than five. Of those, how many are seriously looking? They’re paying by the year so why not have your CV out there just in case?

Don’t be fooled, paying to file your CV doesn’t ensure you’re seriously looking. The only thing it ensures is that you’re a revenue stream.

Critical Mass

Matching candidates to employers is low probability. The number who fit the profile is probably 1 in 1,000 or even less.

So of the 10 to 25 thousand relevant potential candidates, some percentage will actually be looking for work. Of that percentage, a smaller percentage will pay to be seen by employers, less than might otherwise be seen if the service was free (for job seekers). I expect that number to be 2,000 or less and that number is, in my opinion, inflated by the cheap beta registration.

So an employer is going to pay big bucks—much more than a typical job ad—to reach a much smaller target audience?

People will pay money if they are getting value for money. Paying $15,000 to a recruiter to find you a programmer is cheap because the recruiter is doing most of the legwork and assuming a large part of the risk (in that they don’t typically get paid if you don’t find someone you like). Job ads are cheap because they may reach tens or even hundreds of thousands of candidates.

It’s like Careers is charging as if it’s already a proven success.

Things like this work on the principle of critical mass. Take eBay. People buy on eBay because there are things to be bought. People sell on eBay because people will buy them. Without either group the site fails. A job board is no different. People go to them because they have jobs they want. Companies advertise on them because they reach the right audience.

So what job board—and let’s be honest; that’s what it is—is going to survive by restricting itself to 10 to 25 thousand candidates globally? Perhaps Jeff and Joel are thinking that it will be so successful that everyone else will just have to sign up anyway.

Good luck with that business strategy.

Is It Legal?

I have to wonder if anyone has bothered to ask this yet. Consider Job seekers are hit by illegal fees. Not just in the United Arab Emirates is it illegal to charge job seekers. Also, How job seekers can best use recruitment agencies (emphasis added):

Recruitment agencies make their money by charging employers a fee for a permanent hire or an hourly or daily margin on a temporary placement. It is illegal to charge job seekers a fee for finding them work.

That’s for Australia. The point here is that Jeff and Joel probably need to be very careful about how they define the Careers site if they don’t want to run afoul of laws set up to protect the unemployed from unscrupulous practices.

Smoke and Mirrors

From the FAQ:

If you are seeking employment, we do require a modest annual payment to file your CV. Filing your CV makes it eligible to appear in searches by hiring managers via our private search interface. This fee allows us to ensure employers that everyone they find is actively looking for a job.

We’re being sold something here.

Also consider Upgrade your career:

Employers can see how good you are at communicating, …


… how well you explain things, …


… how well you understand the tools that you’re using, …

Er… OK.

… and generally, if you’re a great developer or not.

Whoa. Sorry, but the fact that I know that parsing HTML with regular expressions is retarded, that I can explain how to add a jQuery click() handler, and that not sanitizing user input in SQL statements is idiotic doesn’t make me a great developer. It means anything from I like teaching to I’m narcissistic enough to like hearing the sound of my own voice (virtually speaking), perhaps both.

And let’s not forget that all of this can be established by simply including a URL to your Stackoverflow profile on your CV.


The numbers just don’t add up on this one. My only question is how long it’ll be before that sinks in and the model changes. With so much free choice, it’s just not viable to charge job seekers while severely limiting the candidate pool for employers and charging them an arm and a leg for information they can get from a URL.

Google Wave Invites to Give Away

I've got a dozen or so of these I don't really need. Drop me a line and I'll send you one, first in first served until they run out.

Programming Puzzles, Chess Positions and Huffman Coding

This week Andrew Rollings asked the question ProgrammerPuzzle: Encoding a chess board state throughout a game on StackOverflow.

Now I’ll admit that I love this kind of question. I’m not really such a big fan of Code Golf as that’s an exercise in writing terse, unreadable code (although some of the solutions have been brilliant). But this Chess problem is the sort of thing that will allow a programmer to demonstrate his or her mental acuity and problem solving ability (or the lack thereof).

The Problem

What is the most space-efficient way you can think of to encode the state of a chess game (or subset thereof)? That is, given a chess board with the pieces arranged legally, encode both this initial state and all subsequent legal moves taken by the players in the game.

Chess is played on an 8x8 board, with each player starting with an identical set of 16 pieces consisting of 8 pawns, 2 rooks, 2 knights, 2 bishops, 1 queen and 1 king, as illustrated in this image of the starting position:

Positions are generally recorded as a letter for the column followed by the number for the row so White’s queen is at d1. Moves are most often stored in algebraic notation, which is unambiguous and generally only specifies the minimal information necessary. Consider this opening:

1. e4 e5
2. Nf3 Nc6
3. …

which translates to:

  1. White moves king’s pawn from e2 to e4 (it is the only piece that can get to e4 hence “e4”);
  2. Black moves the king’s pawn from e7 to e5;
  3. White moves the knight (N) to f3;
  4. Black moves the knight to c6.

The board looks like this:

An important ability for any programmer is to be able to correctly and unambiguously specify the problem.

So what’s missing or ambiguous? A lot as it turns out.

Board State vs Game State

The first thing you need to determine is whether you’re storing the state of a game or the position of pieces on the board. Encoding simply the positions of the pieces is one thing but the problem says “all subsequent legal moves”. The problem also says nothing about knowing the moves up to this point. That’s actually a problem as I’ll explain.


Castling

The game has proceeded as follows:

1. e4 e5
2. Nf3 Nc6
3. Bb5 a6
4. Ba4 Bc5

The board looks as follows:

White has the option of castling. Part of the requirements for this is that the king and the relevant rook have never moved, so whether the king or either rook of each side has moved will need to be stored. Obviously if they aren’t on their starting squares they have moved; otherwise it needs to be specified.

There are several strategies that can be used for dealing with this problem.

Firstly, we could store an extra 6 bits of information (1 for each king and rook) to indicate whether that piece had moved. We could streamline this by only storing a bit for one of these six squares if the right piece happens to be in it. Alternatively we could treat each unmoved piece as another piece type so instead of 6 piece types on each side (pawn, rook, knight, bishop, queen and king) there are 8 (adding unmoved rook and unmoved king).
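As a sketch of the first strategy, the six “has moved” flags can be packed into a single byte. The class name and bit assignments below are my own for illustration, not any standard encoding:

```java
// Sketch: packing the six castling-relevant "has moved" flags into one byte.
// Bit assignments are arbitrary.
public class CastlingRights {
    public static final int WHITE_KING_MOVED   = 1 << 0;
    public static final int WHITE_A_ROOK_MOVED = 1 << 1;
    public static final int WHITE_H_ROOK_MOVED = 1 << 2;
    public static final int BLACK_KING_MOVED   = 1 << 3;
    public static final int BLACK_A_ROOK_MOVED = 1 << 4;
    public static final int BLACK_H_ROOK_MOVED = 1 << 5;

    private int flags; // all zero initially: nothing has moved yet

    public void markMoved(int flag) {
        flags |= flag;
    }

    // White may still castle kingside only if neither the king nor the
    // h-rook has ever moved.
    public boolean whiteMayCastleKingside() {
        return (flags & (WHITE_KING_MOVED | WHITE_H_ROOK_MOVED)) == 0;
    }
}
```

Testing a castling right is then just a two-bit mask check.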

En Passant

Another peculiar and often-neglected rule in Chess is En Passant.

The game has progressed.

1. e4 e5
2. Nf3 Nc6
3. Bb5 a6
4. Ba4 Bc5
5. O-O b5
6. Bb3 b4
7. c4

Black now has the option of moving his pawn on b4 to c3, taking the White pawn on c4. This is only available at the first opportunity, meaning that if Black passes on the option now he can’t take it next move. So we need to store this.

If we know the previous move we can definitely answer whether En Passant is possible. Alternatively we can store whether each pawn on its 4th rank has just moved there with a double move forward. Or we can look at each possible En Passant position on the board and have a flag to indicate whether it’s possible or not.


Promotion

It is White’s move. If White moves his pawn on h7 to h8 it can be promoted to any piece other than a king or pawn. 99% of the time it is promoted to a Queen but sometimes it isn’t, typically because promoting to a Queen would force a stalemate when otherwise you’d win. This is written as:

56. h8=Q

This is important in our problem because it means we can’t count on there being a fixed number of pieces on each side. It is entirely possible (but incredibly unlikely) for one side to end up with 9 queens, 10 rooks, 10 bishops or 10 knights if all 8 pawns get promoted.


Stalemates

When in a position from which you cannot win, your best tactic is to try for a stalemate. The most likely variant is where you cannot make a legal move (usually because any move would put your king in check). In this case you can claim a draw. This one is easy to cater for.

The second variant is threefold repetition. If the same board position occurs three times in a game (or will occur a third time on the next move), a draw can be claimed. The positions need not occur consecutively (meaning it doesn’t have to be the same sequence of moves repeated three times). This greatly complicates the problem because you have to remember every previous board position. If this is a requirement, the only possible solution is to store every previous move.

Lastly, there is the fifty move rule. A player can claim a draw if no pawn has moved and no piece has been taken in the previous fifty consecutive moves, so we would need to store how many moves have passed since a pawn was moved or a piece taken (the later of the two). This requires 6 bits (0-63).

Whose Turn Is It?

Of course we also need to know whose turn it is and this is a single bit of information.

Two Problems

Because of the threefold repetition case, the only feasible or sensible way to store the game state is to store all the moves that led to this position. I’ll tackle that problem later. The board state problem will be simplified to this: store the current position of all pieces on the board, ignoring castling, en passant, stalemate conditions and whose turn it is.

Piece layout can be broadly handled in one of two ways: by storing the contents of each square or by storing the position of each piece.

Simple Contents

There are six piece types (pawn, rook, knight, bishop, queen and king). Each piece can be White or Black, so a square may contain one of 12 possible pieces or it may be empty, giving 13 possibilities. 13 values can be stored in 4 bits (0-15), so the simplest solution is to store 4 bits for each of the 64 squares: 256 bits of information.
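A minimal sketch of this scheme, packing two 4-bit squares into each byte. The piece codes (0-12, with 0 meaning empty) are an arbitrary assignment of my own:

```java
// Sketch: 64 squares x 4 bits = 32 bytes, two squares per byte.
// Piece codes 0-12 are an arbitrary assignment (0 = empty).
public class NibbleBoard {
    private final byte[] squares = new byte[32];

    public void set(int square, int piece) { // square 0-63, piece 0-12
        int index = square / 2;
        if (square % 2 == 0) {
            // even squares live in the low nibble
            squares[index] = (byte) ((squares[index] & 0xF0) | piece);
        } else {
            // odd squares live in the high nibble
            squares[index] = (byte) ((squares[index] & 0x0F) | (piece << 4));
        }
    }

    public int get(int square) {
        int b = squares[square / 2] & 0xFF;
        return square % 2 == 0 ? b & 0x0F : b >> 4;
    }
}
```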

The advantage of this method is that manipulation is incredibly easy and fast. This could even be extended by adding 3 more possibilities without increasing the storage requirements: a pawn that has moved 2 spaces on the last turn, a king that hasn’t moved and a rook that hasn’t moved, which will cater for a lot of previously mentioned issues.

But we can do better.

Base 13 Encoding

It is often helpful to think of the board position as a very large number. This is often done in computer science. For example, the halting problem treats a computer program (rightly) as a large number.

The first solution treats the position as a 64 digit base 16 number but as demonstrated there is redundancy in this information (being the 3 unused possibilities per “digit”) so we can reduce the number space to 64 base 13 digits. Of course this can’t be done as efficiently as base 16 can but it will save on storage requirements (and minimizing storage space is our goal).

In base 10 the number 234 is equivalent to 2 x 10^2 + 3 x 10^1 + 4 x 10^0.

In base 16 the number 0xA50 is equivalent to 10 x 16^2 + 5 x 16^1 + 0 x 16^0 = 2640 (decimal).

So we can encode our position as p0 x 13^63 + p1 x 13^62 + ... + p63 x 13^0 where pi represents the contents of square i.

2^256 equals approximately 1.16e77. 13^64 equals approximately 1.96e71, which requires 237 bits of storage space. That saving of a mere 7.5% comes at a cost of significantly increased manipulation costs.
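A sketch of this encoding using BigInteger, treating the 64 square contents (digits 0-12, the same arbitrary assignment as before) as one big base-13 number; the result always fits in 237 bits:

```java
import java.math.BigInteger;

// Sketch: packing 64 base-13 digits (square contents 0-12) into one
// BigInteger, and unpacking again.
public class Base13Board {
    private static final BigInteger THIRTEEN = BigInteger.valueOf(13);

    public static BigInteger encode(int[] squares) { // 64 values, each 0-12
        BigInteger result = BigInteger.ZERO;
        for (int piece : squares) {
            result = result.multiply(THIRTEEN).add(BigInteger.valueOf(piece));
        }
        return result;
    }

    public static int[] decode(BigInteger encoded) {
        int[] squares = new int[64];
        for (int i = 63; i >= 0; i--) {
            BigInteger[] divRem = encoded.divideAndRemainder(THIRTEEN);
            squares[i] = divRem[1].intValue(); // least significant digit first
            encoded = divRem[0];
        }
        return squares;
    }
}
```

The increased manipulation cost is visible here: reading one square means arbitrary-precision division rather than a shift and mask.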

Variable Base Encoding

In legal boards certain pieces can’t appear in certain squares. For example, pawns cannot occur in the first or eighth ranks, reducing the possibilities for those 16 squares to 11. That reduces the possible boards to 11^16 x 13^48 = 1.35e70 (approximately), requiring 233 bits of storage space.

Actually encoding and decoding such values to and from decimal (or binary) is a little more convoluted but it can be done reliably and is left as an exercise to the reader.

Variable Width Alphabets

The previous two methods can both be described as fixed-width alphabetic encoding. Each of the 11, 13 or 16 members of the alphabet is substituted for another value. Each “character” is the same width but the efficiency can be improved when you consider that each character is not equally likely.

Consider Morse code. Characters in a message are encoded as a sequence of dashes and dots. Those dashes and dots are transferred over radio (typically) with a pause between them to delimit them.

Notice how the letter E (the most common letter in English) is a single dot, the shortest possible sequence, whereas Z (one of the least frequent) is two dashes and two dots.

Such a scheme can significantly reduce the size of an expected message but comes at the cost of increasing the size of a random character sequence.

It should be noted that Morse code has another inbuilt feature: dashes are as long as three dots so the above code is created with this in mind to minimize the use of dashes. Since 1s and 0s (our building blocks) don’t have this problem, it’s not a feature we need to replicate.

Lastly, there are two kinds of rests in Morse code. A short rest (the length of a dot) is used to distinguish between dots and dashes. A longer gap (the length of a dash) is used to delimit characters.

So how does this apply to our problem?

Huffman Coding

There is an algorithm for constructing variable length codes called Huffman coding. Huffman coding creates a variable length code substitution, typically using the expected frequency of the symbols to assign shorter values to the more common symbols.

In the above tree, the letter E is encoded as 000 (or left-left-left) and S is 1011. It should be clear that this encoding scheme is unambiguous.

This is an important distinction from Morse code. Morse code has the character separator so it can do otherwise ambiguous substitution (eg 4 dots can be H or 2 Is) but we only have 1s and 0s so we choose an unambiguous substitution instead.

Below is a simple implementation:

// Requires: import java.util.*;
private static class Node {
  private final Node left;
  private final Node right;
  private final String label;
  private final int weight;

  private Node(String label, int weight) {
    this.left = null;
    this.right = null;
    this.label = label;
    this.weight = weight;
  }

  public Node(Node left, Node right) {
    this.left = left;
    this.right = right;
    label = "";
    weight = left.weight + right.weight;
  }

  public boolean isLeaf() {
    return left == null && right == null;
  }

  public Node getLeft() {
    return left;
  }

  public Node getRight() {
    return right;
  }

  public String getLabel() {
    return label;
  }

  public int getWeight() {
    return weight;
  }
}

private static class WeightComparator implements Comparator<Node> {
  public int compare(Node o1, Node o2) {
    if (o1.getWeight() == o2.getWeight()) {
      return 0;
    } else {
      return o1.getWeight() < o2.getWeight() ? -1 : 1;
    }
  }
}

private static class PathComparator implements Comparator<String> {
  public int compare(String o1, String o2) {
    if (o1 == null) {
      return o2 == null ? 0 : -1;
    } else if (o2 == null) {
      return 1;
    } else {
      int length1 = o1.length();
      int length2 = o2.length();
      if (length1 == length2) {
        return o1.compareTo(o2);
      } else {
        return length1 < length2 ? -1 : 1;
      }
    }
  }
}

private final static List<String> COLOURS;
private final static Map<String, Integer> WEIGHTS;

static {
  List<String> list = new ArrayList<String>();
  list.add("White");
  list.add("Black");
  COLOURS = Collections.unmodifiableList(list);
  Map<String, Integer> map = new HashMap<String, Integer>();
  for (String colour : COLOURS) {
    map.put(colour + " " + "King", 1);
    map.put(colour + " " + "Queen", 1);
    map.put(colour + " " + "Rook", 2);
    map.put(colour + " " + "Knight", 2);
    map.put(colour + " " + "Bishop", 2);
    map.put(colour + " " + "Pawn", 8);
  }
  map.put("Empty", 32);
  WEIGHTS = Collections.unmodifiableMap(map);
}

public static void main(String args[]) {
  PriorityQueue<Node> queue = new PriorityQueue<Node>(WEIGHTS.size(), new WeightComparator());
  for (Map.Entry<String, Integer> entry : WEIGHTS.entrySet()) {
    queue.add(new Node(entry.getKey(), entry.getValue()));
  }
  while (queue.size() > 1) {
    Node first = queue.poll();
    Node second = queue.poll();
    queue.add(new Node(first, second));
  }
  Map<String, Node> nodes = new TreeMap<String, Node>(new PathComparator());
  addLeaves(nodes, queue.peek(), "");
  for (Map.Entry<String, Node> entry : nodes.entrySet()) {
    System.out.printf("%s %s%n", entry.getValue().getLabel(), entry.getKey());
  }
}

public static void addLeaves(Map<String, Node> nodes, Node node, String prefix) {
  if (node != null) {
    addLeaves(nodes, node.getLeft(), prefix + "0");
    addLeaves(nodes, node.getRight(), prefix + "1");
    if (node.isLeaf()) {
      nodes.put(prefix, node);
    }
  }
}
One possible output is:

        White   Black
Empty   0
Pawn    110     100
Rook    11111   11110
Knight  10110   10101
Bishop  10100   11100
Queen   111010  111011
King    101110  101111

For a starting position this equates to 32 x 1 + 16 x 3 + 12 x 5 + 4 x 6 = 164 bits.

State Difference

Another possible approach is to combine the very first approach with Huffman coding. This is based on the assumption that most expected Chess boards (rather than randomly generated ones) are more likely than not to, at least in part, resemble a starting position.

So what you do is XOR the 256 bit current board position with a 256 bit starting position and then encode that (using Huffman coding or, say, some method of run length encoding). Obviously this will be very efficient to start with (an unchanged board XORs to all zeroes, which compresses to almost nothing) but the storage required will increase as the game progresses.
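The XOR step itself is trivial; a sketch over the 32-byte (256-bit) boards:

```java
// Sketch: XOR the 32-byte current position against the 32-byte starting
// position. Boards close to the start yield mostly zero bytes, which then
// compress well under run-length or Huffman coding.
public class StateDifference {
    public static byte[] diff(byte[] current, byte[] start) {
        byte[] result = new byte[current.length];
        for (int i = 0; i < current.length; i++) {
            result[i] = (byte) (current[i] ^ start[i]);
        }
        return result;
    }
}
```

Note that XOR is its own inverse: XORing the difference with the starting position recovers the current board, so no separate decode routine is needed.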

Piece Position

As mentioned, another way of attacking this problem is to instead store the position of each piece a player has. This works particularly well with endgame positions where most squares will be empty (but in the Huffman coding approach empty squares only use 1 bit anyway).

Each side will have a king and 0-15 other pieces. Because of promotion the exact make up of those pieces can vary enough that you can’t assume the numbers based on the starting positions are maxima.

The logical way to divide this up is store a Position consisting of two Sides (White and Black). Each Side has:

  • A king: 6 bits for the location;
  • Has pawns: 1 (yes), 0 (no);
  • If yes, number of pawns: 3 bits (0-7 encodes 1-8);
  • If yes, the location of each pawn is encoded: 45 bits (see below);
  • Number of non-pawns: 4 bits (0-15);
  • For each piece: type (2 bits for queen, rook, knight, bishop) and location (6 bits)

As for the pawn locations, the pawns can only be on 48 possible squares (not 64 like the others). As such, it is better not to waste the extra 16 values that using 6 bits per pawn would allow. So if you have 8 pawns there are 48^8 possibilities, equalling 28,179,280,429,056. You need 45 bits to encode that many values.

That’s 105 bits per side or 210 bits total. The starting position is the worst case for this method however and it will get substantially better as you remove pieces.

It should be pointed out that there are actually fewer than 48^8 possibilities because the pawns can’t all be on the same square. The first has 48 possibilities, the second 47 and so on. 48 x 47 x … x 41 = 1.52e13, requiring 44 bits of storage.

You can further improve this by eliminating the squares that are occupied by other pieces (including the other side) so you could first place the white non-pawns then the black non-pawns, then the white pawns and lastly the black pawns. On a starting position this reduces the storage requirements to 44 bits for White and 42 bits for Black.
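The falling-factorial arithmetic above is easy to verify; a quick check, not part of any encoder:

```java
import java.math.BigInteger;

// Verifying the text's arithmetic: placing 8 pawns on 48 squares with no
// two sharing a square gives 48 x 47 x ... x 41 orderings, which fits in
// 44 bits.
public class PawnPlacements {
    public static BigInteger placements(int squares, int pawns) {
        BigInteger result = BigInteger.ONE;
        for (int i = 0; i < pawns; i++) {
            result = result.multiply(BigInteger.valueOf(squares - i));
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger p = placements(48, 8);
        System.out.println(p);             // 15214711438080, about 1.52e13
        System.out.println(p.bitLength()); // 44
    }
}
```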

Combined Approaches

Each of these approaches has its strengths and weaknesses, so another possible optimization is to combine them: pick, say, the best 4, encode a scheme selector in the first two bits and then the scheme-specific storage after that.

With the overhead that small, this will by far be the best approach.

Game State

I return to the problem of storing a game rather than a position. Because of the threefold repetition we have to store the list of moves that have occurred to this point.


One thing you have to determine is whether you are simply storing a list of moves or annotating the game. Chess games are often annotated, for example:

17. Bb5!! Nc4?

White’s move is marked by two exclamation points as brilliant whereas Black’s is viewed as a mistake. See Chess punctuation.

Additionally you could also need to store free text as the moves are described.

I am assuming that the moves are sufficient so there will be no annotations.

Algebraic Notation

We could simply store the text of the move here (“e4”, “Bxb5”, etc). Including a terminating byte you’re looking at about 6 bytes (48 bits) per move (worst case). That’s not particularly efficient.

The second thing to try is to store the starting location (6 bits) and end location (6 bits) so 12 bits per move. That is significantly better.
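Packing such a move is straightforward; a sketch with squares numbered 0-63:

```java
// Sketch: a move as 6 bits for the from-square and 6 bits for the
// to-square, packed into the low 12 bits of an int.
public class PackedMove {
    public static int pack(int from, int to) {
        return (from << 6) | to;
    }

    public static int from(int packed) {
        return packed >> 6;
    }

    public static int to(int packed) {
        return packed & 0x3F; // low 6 bits
    }
}
```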

Alternatively we can determine all the legal moves from the current position in a predictable and deterministic way and state which we’ve chosen. This then goes back to the variable base encoding mentioned above. White and Black each have 20 possible first moves (16 pawn moves and 4 knight moves), more on the second and so on.


Conclusion

There is no absolutely right answer to this question. There are many possible approaches of which the above are just a few.

What I like about this and similar problems is that it demands abilities important to any programmer like considering the usage pattern, accurately determining requirements and thinking about corner cases.

Chess positions taken as screenshots from Chess Position Trainer.

Reviewing Project Lombok or the Right Way to Write a Library

You could consider this a parody of my own Spring Batch or How Not to Design an API. Credit where credit is due however and this brings me to Project Lombok.

What is Project Lombok?

Every Java developer knows that Java involves writing a lot of boilerplate code. Create a value object (or “data transfer object”) and you might have a behaviour-less class with a dozen properties. Those twelve statements are the only important thing in the class. After that you create a hundred lines of getters, setters (if not immutable), an equals/hashCode and a toString method, possibly all IDE generated.

This is an error-prone process, even when generated by an IDE. An IDE won’t typically tell you if, say, you add another data member and don’t regenerate your equals and hashCode methods.

Project Lombok seeks to greatly reduce the need for such boilerplate by using annotations to automatically generate it so you don’t have to. This fits in well with my own philosophy:

Lines of Code Are The Enemy

This is not a new or revolutionary idea. Decades ago it was noted that the number of lines of code was an important metric for both programmer productivity and expected bugs, meaning that whether you were dealing with assembly language or Erlang the metrics were roughly the same. This in part explains the steady move towards higher level languages as the more you can get done in one line of code the better.

Bugs per lines of code addresses this point, referencing Code Complete and other sources.

This is why I think things like first-class properties and closures are advantages in the .Net world (over Java): because they can do the same thing with less code, even if you can do the same thing with IDE-generated getters and setters and anonymous inner classes (respectively).

What Can Project Lombok Do?

Project Lombok has seven annotations for minimizing boilerplate code.

@Getter / @Setter

Never write public int getFoo() {return foo;} again.

@ToString

No need to start a debugger to see your fields: just let Lombok generate a toString for you!

@EqualsAndHashCode

Equality made easy: generates hashCode and equals implementations from the fields of your object.

@Data

All together now: a shortcut for @ToString, @EqualsAndHashCode, @Getter on all fields, and @Setter on all non-final fields. You even get a free constructor to initialize your final fields!

@Cleanup

Automatic resource management: call your close() methods safely with no hassle.

@Synchronized

synchronized done right: don’t expose your locks.

@SneakyThrows

To boldly throw checked exceptions where no one has thrown them before!

The effect can be dramatic. Using @Data, for example, you can replace this:

import java.util.Arrays;

public class DataExample {
    private final String name;
    private int age;
    private double score;
    private String[] tags;

    public DataExample(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    void setAge(int age) {
        this.age = age;
    }

    public int getAge() {
        return age;
    }

    public void setScore(double score) {
        this.score = score;
    }

    public double getScore() {
        return score;
    }

    public String[] getTags() {
        return tags;
    }

    public void setTags(String[] tags) {
        this.tags = tags;
    }

    public String toString() {
        return "DataExample(" + name + ", " + age + ", " + score + ", " + Arrays.deepToString(tags) + ")";
    }

    public boolean equals(Object o) {
        if (o == this) return true;
        if (o == null) return false;
        if (o.getClass() != this.getClass()) return false;
        DataExample other = (DataExample) o;
        if (name == null ? other.name != null : !name.equals(other.name)) return false;
        if (age != other.age) return false;
        if (Double.compare(score, other.score) != 0) return false;
        if (!Arrays.deepEquals(tags, other.tags)) return false;
        return true;
    }

    public int hashCode() {
        final int PRIME = 31;
        int result = 1;
        final long temp1 = Double.doubleToLongBits(score);
        result = (result * PRIME) + (name == null ? 0 : name.hashCode());
        result = (result * PRIME) + age;
        result = (result * PRIME) + (int) (temp1 ^ (temp1 >>> 32));
        result = (result * PRIME) + Arrays.deepHashCode(tags);
        return result;
    }

    public static class Exercise<T> {
        private final String name;
        private final T value;

        private Exercise(String name, T value) {
            this.name = name;
            this.value = value;
        }

        public static <T> Exercise<T> of(String name, T value) {
            return new Exercise<T>(name, value);
        }

        public String getName() {
            return name;
        }

        public T getValue() {
            return value;
        }

        public String toString() {
            return "Exercise(name=" + name + ", value=" + value + ")";
        }

        public boolean equals(Object o) {
            if (o == this) return true;
            if (o == null) return false;
            if (o.getClass() != this.getClass()) return false;
            Exercise<?> other = (Exercise<?>) o;
            if (name == null ? other.name != null : !name.equals(other.name)) return false;
            if (value == null ? other.value != null : !value.equals(other.value)) return false;
            return true;
        }

        public int hashCode() {
            final int PRIME = 31;
            int result = 1;
            result = (result * PRIME) + (name == null ? 0 : name.hashCode());
            result = (result * PRIME) + (value == null ? 0 : value.hashCode());
            return result;
        }
    }
}

with this:
import lombok.AccessLevel;
import lombok.Setter;
import lombok.Data;
import lombok.ToString;

@Data
public class DataExample {
    private final String name;
    @Setter(AccessLevel.PACKAGE) private int age;
    private double score;
    private String[] tags;

    @ToString(includeFieldNames = true)
    @Data(staticConstructor = "of")
    public static class Exercise<T> {
        private final String name;
        private final T value;
    }
}


With Annotations? How?

Most developers’ experience with annotations involves:

  • Putting @Override in overridden methods;
  • Using @SuppressWarnings, often to disable an IDE warning about casting generic collections; and
  • Using API-specific annotations in JPA, J2EE, Spring, etc (eg @Resource, @Transactional).

Far fewer developers have ever written an annotation. If you haven’t, it’s worth having a read of Sun’s Annotation Tutorial. Annotations can be used to generate code at compile-time. That’s how Project Lombok works.
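For illustration, here is what declaring a compile-time annotation looks like. The annotation name and its element are made up for this example (Lombok’s real annotations live in the lombok package):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// A minimal annotation declaration. SOURCE retention means the annotation
// exists only at compile time, which is the level a Lombok-style code
// generator operates at; it never reaches the class file.
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.TYPE)
public @interface GenerateBoilerplate {
    boolean includeToString() default true;
}
```

The code generation itself is done by an annotation processor that the compiler invokes when it encounters the annotation.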

Is This Kosher?

There is some debate about this. One school of thought believes that code should still compile and possibly even work without the source annotations present. Put another way, the belief is that annotations shouldn’t be used as a substitute for language features.

Java 7 had debate over many new features, some changing the language. One is first-class properties, to avoid the boilerplate of getters and setters or provide them in a far terser manner, much like C#/.Net does. First-class properties didn’t make it into Java 7. Project Lombok gives a viable alternative.

Possibly more controversial is that you can use @SneakyThrows to throw checked exceptions without declaring them. This stokes a debate as old as Java itself: are checked exceptions a mistake? I view them as a failed experiment in software engineering.

So Project Lombok is somewhat controversial (some have even gone so far as to call it a “hack”) so perhaps the best thing to come out of it is the debate about Java and its future. Up until a couple of years ago Java was a hotbed for debate about software design and an incubator for new technologies, methodologies and architectures. Java (through the Spring framework) popularized dependency injection and inversion of control as well as the use of MVC in Web frameworks.

But after Java 5 Sun seems to have lost its way. It lost such luminaries as Joshua Bloch (to Google). Java 5 itself was a huge change and debate still rages about the complexity of Java generics and the wisdom of type erasure. So much so that closures for Java 7 were declined (but out of nowhere, it was announced closures are now coming to Java 7).

Microsoft has demonstrated leadership in pushing the .Net platform forward to the point that Java is now playing catch-up (which wasn’t the case prior to .Net 3.0/3.5), whereas Java 7 was mired in debate and lacked leadership and vision from Sun about where to go next, complicated by grand misadventures like the stillborn JavaFX.

But anyway, perhaps all this debate will help reinvigorate a Java development community that seems to have given up.

What has Project Lombok Done Right?

Firstly, using Lombok with Maven is easy. I can’t emphasize enough how useful that is. There’s nothing more frustrating than digging around to find the right artifact(s) and/or repositories to get a particular library to work in Maven. Usually it’s not hard but sometimes it is. I have better things to do than try and figure out what someone should just mention in their project’s documentation.

Also the documentation isn’t super-extensive (as, say, Spring’s typically is) but it sure beats some others (eg typical Apache projects). At least there are examples of all the annotations.

Another thing I really like about it is that it generates toString(), hashCode() and equals() methods automatically but unlike some Apache Commons libraries, it doesn’t do it via reflection at run-time.

Lastly, Project Lombok is released under the highly permissive MIT license.

What Could Project Lombok Do Better?

Currently, Project Lombok only integrates (well) with Eclipse. Netbeans and IntelliJ users are out in the cold and support for those two IDEs seems anywhere from far off to never going to happen. I’m a huge fan of IntelliJ and frankly I wouldn’t use any other IDE given the choice. People tend to feel pretty strongly about their IDEs so currently Project Lombok’s lack of IntelliJ support is (unfortunately) a deal breaker.

I’ve had a bit of a look at the IntelliJ plug-in architecture. You can create plug-ins for new languages so all that’s really required is modification to the existing code for Java, assuming it is done with the same mechanism (which is a big and possibly incorrect assumption).

It seems like a serious limitation of IntelliJ’s internal compiler that it can’t handle compile-time annotations like this in a general sense. Surely one compile-time annotation is like any other, right?

What Features Could Be Added?

Others have taken to this idea, despite the controversy. One such extension is Morbok, which uses the same idea to get rid of the boilerplate of creating loggers in classes.

It’s worth noting that @Cleanup should become obsolete with Java 7 (a year or more from now) with the advent of ARM (“Automatic Resource Management”). Again, this mimics another .Net feature, where such tedious try-catch-finally blocks are abbreviated using using() { … } blocks for anything that implements the IDisposable interface.
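For reference, this is the try/finally boilerplate that @Cleanup (and Java 7’s ARM blocks) are designed to remove; a sketch with a made-up method name:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// The pattern @Cleanup abbreviates: manually closing a resource in a
// finally block so it is released even if an exception is thrown.
public class CleanupExample {
    public static int firstByte(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        try {
            return in.read();
        } finally {
            in.close();
        }
    }
}
```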

One limitation I noticed was that you can’t use @Data on enums. It would be useful to have an enum-specific version of this. One problem that Lombok could solve is the boilerplate around the use of values(). Enum.values() returns an array of the values. Because arrays aren’t immutable, the array needs to be copied each time you call it. This makes code like this inefficient:

public enum Gender {
  MALE("M", "male"), FEMALE("F", "female");

  private final static Map<String, Gender> LOOKUP;

  static {
    Map<String, Gender> map = new HashMap<String, Gender>();
    for (Gender gender : values()) {
      map.put(gender.code, gender);
    }
    LOOKUP = Collections.unmodifiableMap(map);
  }

  public static Gender find(String code) {
    return LOOKUP.get(code);
  }

  private final String code;
  private final String description;

  private Gender(String code, String description) {
    this.code = code;
    this.description = description;
  }

  public String getCode() { return code; }
  public String getDescription() { return description; }
}

That could easily be reduced to a couple of Lombok-style annotations. For example:

public enum Gender {
  MALE("M", "male"), FEMALE("F", "female");

  // plus a hypothetical enum-aware annotation to generate the LOOKUP map
  // and find() method from the code field
  @Getter private final String code;
  @Getter private final String description;

  private Gender(String code, String description) {
    this.code = code;
    this.description = description;
  }
}

I’m a big fan of Project Lombok. It’s my kind of library: lightweight, practical and doesn’t get in your face, in exactly the way that many Java Web frameworks aren’t. For example, Seam uses a flawed idea (component-oriented JSF) to solve a problem I don’t have. And for those of you that don’t think JSF is a failed experiment, it’s been about to take off for 7+ years now. At some point you just have to accept that it’s not going to work.

I personally come down on the side of the fence that accepts the utility of using annotations as a substitute for language features, especially considering how Java has stalled in that department in recent years (and Java 7 is delayed now til the end of 2010).

If you happen to be an Eclipse user, you’re in luck. Give it a try. If you’re not, well it’s a choice between those red lines under your seemingly non-existent methods and not using Lombok. But hope springs eternal for future IDE support.

Lost in Translation or Why GWT Isn’t the Future of Web Development

I recently read Is GWT the future of web development? The post postulates that GWT (“Google Web Toolkit”) is the future because it introduces type safety, leverages the existing base of Java programmers and it has some widgets.

Google has recently put their considerable weight behind it, most notably with Google Wave. I’m naturally hesitant to bet against Google or Lars Rasmussen but the fact is that’s what I’m doing.

On Type Safety and Static Typing

In the 90s type-safety and static typing ruled almost unchallenged, first with C then C++ and Java (yes I realize Pascal, Algol-68 and a plethora of other languages came beforehand). Perl was the calling card of smug, bearded Unix systems administrators.

Performance and the challenges of increasing complexity with relatively low-powered hardware (certainly by today’s standards) were the impetus behind this movement. The idea that variables didn’t need to be declared or that a type could morph as required was tantamount to the sky falling.

Over the last decade, Javascript, PHP, Python, Perl, Ruby and other languages (and yes, some have a history going back far earlier than that) have clearly demonstrated that the sky hasn’t fallen with loose and dynamic typing.

On Leveraging Java Programmers

This sounds good in theory but let me put it to you another way: if you were to write textbooks in German would you write them in German or write them in English and have a tool convert them to German?

Anyone who has studied or knows a second language knows that some things just don’t translate. The same applies to programming languages. Javascript has lots of features that Java doesn’t: first class functions, closures, extension methods, a vastly different “this” context, anonymous objects, dynamic typing and so on.

The problems you face when writing a “cross-compiler” are:

  1. The weaknesses and limitations of the end result are the combined weaknesses of both languages (or “A union B” in a maths context where A and B are the two languages);
  2. The strengths of the end result are the common strengths (“A intersect B”) of the two languages;
  3. The idioms are different; and
  4. Abstractions are leaky. Jeff Atwood characterized this as All Abstractions Are Failed Abstractions.

This is the same basic problem with ORMs like Hibernate: the object-relational impedance mismatch. Every now and again you end up spending half a day figuring out the correct combination of properties, annotations, XML and VM parameters to have a query generate the right two lines of SQL that’ll actually be performant.

Another problem is that GWT fools naive Java developers into thinking they don’t need to learn Javascript.

My position can be summed up as: GWT treats Javascript as a bug that needs to be solved.

On Widgets and Maturity

I’ve programmed with GWT. The widget selection is woeful. The standard GWT widgets look awful, even amateurish. There are some third-party options but ExtGWT is a shockingly bad library. SmartGWT looks like a better alternative (and is actually a community effort rather than a split GPL/commercial mish-mash from someone who simply doesn’t understand Java Generics). There aren’t many other choices.

Javascript has many choices: YUI, ExtJS (completely different beast to ExtGWT), Dojo, jQuery UI, SmartClient and others. Not only is there substantially more choice but the choices are substantially more mature.

Development Speed is King

Java Web apps can take minutes to build and deploy. Within certain restrictions you can hot-deploy classes and JSPs. One of the wonderful things about PHP and Javascript development is that the build and deploy step is typically replaced by saving the file you’re working on and clicking reload on your browser.

GWT compiles are brutal, so much so that significant effort has gone into improving the experience in GWT 1.6+ and 2.0: draft compiles, parallel compilation, optimized vs unoptimized Javascript and selecting targeted browsers in development. These all help, but they’re in part counteracted by compile times that increase with each version.

Also, full compiles are only required when you change your service interfaces. Pure client-side changes can be tested by refreshing the hosted browser (or a real browser in GWT 2.0+). Server-side changes that don’t alter the interface don’t technically require a GWT recompile, but this can be awkward to implement (in either Ant or Maven).

Why are long compile times a problem?

From The Joel Test: 12 Steps to Better Code:

We all know that knowledge workers work best by getting into "flow", also known as being "in the zone", where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration. This is when they get all of their productive work done. Writers, programmers, scientists, and even basketball players will tell you about being in the zone.


The trouble is, getting into "the zone" is not easy. When you try to measure it, it looks like it takes an average of 15 minutes to start working at maximum productivity.


The other trouble is that it's so easy to get knocked out of the zone. Noise, phone calls, going out for lunch, having to drive 5 minutes to Starbucks for coffee, and interruptions by coworkers -- especially interruptions by coworkers -- all knock you out of the zone.

Even a one minute compile can knock you out of the zone. Even Jeff Atwood—still desperately clinging to his irrational hatred of PHP like an identity-asserting life preserver—has seen the light and is a self-proclaimed Scripter at Heart.

Not Every Application is GMail

I think of a Web application as something like GMail. It is typically a single page (or close to it) and will often mimic a desktop application. Traditional Web pages may use Javascript varying from none to lots but still rely on a fairly standard HTTP transition between HTML pages.

GWT is a technology targeted at Web applications. Load times are high (because it’s not hard to get to 1MB+ of Javascript) but that’s OK because in a whole session you tend to load one page only once. Traditional Web pages are still far more common than Web applications, and GWT is not applicable to that kind of problem.

Even if you limit the discussion to Web applications, all but the largest Web applications can be managed with a Javascript library in my experience.

Now for something truly monumental in size I can perhaps see the value in GWT, or at least the value of type checking. Still, I’d rather deal with dynamic loading of code in Javascript than with GWT 2.0+ code splitting. Compare that to, say, YUI 3 dynamic loading, which leverages terse Javascript syntax and first class functions.

Of Layers and Value Objects

It’s no secret that Java programmers love their layers. No sooner do you have a Presentation Layer, a Controller Layer and a Repository Layer than someone suggests you also need a Database Abstraction Layer, a Service Layer, a Web Services Layer and a Messaging Layer.

And of course you can’t use the same value object to pass data between them so you end up writing a lot of boilerplate like:

public class TranslationUtils {
  public static CustomerVO translate(Customer customer) {
    CustomerVO ret = new CustomerVO();
    // ... copy each property by hand: name, address and so on ...
    return ret;
  }
}

Or you end up using some form of reflection (or even XML) based property copying mechanism.

Apparently this sort of thing is deemed a good idea (or is at least common practice). The problem, of course, is that if your interface mentions that class you’ve created a dependency.

What’s more, Java programmers have a predilection for concerning themselves with swapping out layers or putting in alternative implementations that never happen.

I am a firm believer that lines of code are the enemy. You should have as few of them as possible. As a result, it is my considered opinion that you are better off passing one object around that you can dynamically change as needed than writing lots of boilerplate property copying that due to sheer monotony is error-prone and because of subtle differences can’t be solved (at least not completely) with automated tools.

In Javascript of course you can just add properties and methods to classes (all instances) or individual instances as you see fit. Since Java doesn’t support that, it creates a problem for GWT: what do you use for your presentation objects? Libraries like ExtGWT have ended up treating everything as Maps (so where is your type safety?) that go through several translations (including to and from JSON).
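As a sketch of what that looks like in practice (the class and property names here are hypothetical, not from any real library):

```javascript
// Augmenting an individual instance: just assign the property.
var customer = { name: "Alice" };
customer.discount = 0.1;

// Augmenting every instance of a constructor via its prototype.
function Customer(name) {
  this.name = name;
}
Customer.prototype.describe = function () {
  // 'discount' may or may not exist on any given instance
  return this.name + " (" + (this.discount || 0) + " discount)";
};

var a = new Customer("Alice");
var b = new Customer("Bob");
b.discount = 0.2; // per-instance addition

a.describe(); // "Alice (0 discount)"
b.describe(); // "Bob (0.2 discount)"
```

Java has no equivalent of either assignment, which is why GWT presentation objects tend to end up either as rigid generated classes or as stringly-typed Maps.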

On Idioms

Managers and recruiters tend to place too much stock in what languages and frameworks you (as the programmer candidate) have used. Good programmers can (and do) pick up new things almost constantly. This applies to languages as well. Basic control structures are the same, as are the common operations (at least between two languages within the same family, ie imperative, functional, etc).

Idioms are harder. A lot of people from, say, a Java, C++ or C# background will, when they go to something like PHP, try to recreate what they did in their “mother tongue”. This is nearly always a mistake.

Object-oriented programming is the most commonly misplaced idiom. PHP is not object-oriented (“object capable” is a more accurate description). Distaste for globals is another. Few things are truly global in PHP and serving HTTP requests is quite naturally procedural most of the time. As Joel notes in How Microsoft Lost the API War:

A lot of us thought in the 1990s that the big battle would be between procedural and object oriented programming, and we thought that object oriented programming would provide a big boost in programmer productivity. I thought that, too. Some people still think that. It turns out we were wrong. Object oriented programming is handy dandy, but it's not really the productivity booster that was promised. The real significant productivity advance we've had in programming has been from languages which manage memory for you automatically.

The point is that Java and Javascript have very different idioms. Well-designed Javascript code will do things quite differently to well-designed Java so by definition you’re losing something by converting Java to Javascript: idioms can’t be automagically translated.


Scripting is the future. Long build and deploy steps are anachronistic to both industry trends and maximizing productivity. This trend has been developing for many years.

Where once truly compiled languages (like C/C++ and not Java/C#, which are “compiled” into an intermediate form) accounted for the vast bulk of development, now they are the domain of the tools we use (Web browsers, operating systems, databases, Web servers, virtual machines, etc). They have been displaced by the “semi-compiled” managed platforms (Java and .Net primarily). Those too will have their niches, but for an increasing amount of programming they will in turn be displaced by more script-based approaches.

GWT reminds me of trying to figure out the best way to implement a large-scale, efficient global messaging system using telegrams where everyone else has switched to email.

Microsoft, Marketing Insanity and Windows Piracy

Thursday marks Microsoft’s release of the Windows 7 operating system. This is an opportune moment to reflect on Microsoft’s marketing strategy because it’s like they want me to pirate Windows. And I feel the need to rant.

A Brief History of Windows

Windows 3.0 was released in 1990 in an attempt to stave off the successful (but expensive) Macintosh. And it certainly ticked all the boxes (from a marketing perspective at least).

Various incarnations of Windows 3.x followed over the next 2-3 years. Probably the most interesting thing is that Microsoft only stopped selling Windows 3.x licenses in November 2008.

Windows 95 (“Chicago”) followed in 1995. What some don’t realize is that Windows 95 was in many ways technically superior to MacOS, most notably:

  • Pre-emptive multitasking rather than cooperative multitasking. Rather than waiting for an application to yield, the operating system could interrupt. Nothing new to UNIX but certainly new to Windows and MacOS;
  • Virtual address spaces. Macs at the time had to allocate memory slices to programs. Win95 programs could simply ask for more memory. Depending on your hardware, this could be physical RAM or hard disk space. The OS could swap between them while the application was running too. Again, nothing new for UNIX.

The biggest impact of Windows 95 was that it killed off non-Microsoft DOS.

Another notable feature was DirectX, Microsoft’s gaming API. It wasn’t part of the original release, but in the burgeoning world of hardware acceleration it quickly supplanted OpenGL as a gaming API, to the point that even stalwart OpenGL advocates like id Software are abandoning it; DirectX had been a commercial success and the majority player almost a decade earlier.

Separately Windows NT had sprung into existence to break the connection between Windows and the DOS shell. Windows NT 4.0 in 1996 was probably the first version with broad market success, targeted at businesses.

The next notable release was Windows 2000, the successor to the venerable Windows NT 4.0, as it began the convergence of the NT and 9X (including ME) families. This culminated in 2001 with the release of Windows XP.

The Wintel Alliance

The rise of Windows and decline of IBM’s leadership of the PC coincided with a marriage of convenience between Microsoft (Windows) and Intel, or “Wintel”. This has never been a comfortable arrangement but it is based on a fairly simple principle:

Most people only buy operating systems with new computers.

That conventional wisdom has since been disproven by Apple but more on that later.

The dark side of this marriage is planned obsolescence. Chipsets changing, RAM standards changing, CPU sockets changing and so on. Some of it’s necessary and understandable. Savvy consumers have long figured out that buying high quality (and high cost) components for their PCs for upgrades down the line is a waste of time and money.

The biggest threats to Intel came from the disaster that was (and is) Itanium, from AMD cutting them off at the knees with the hugely successful Athlon line of processors, from AMD’s x86-64 instruction set putting the final nail in Itanium’s coffin, and from the disaster that was the Pentium 4. I say “disaster” but it was mixed: the gigahertz marketing campaign against AMD was successful; as a technology it was a disaster.

Why do I say that? Because eventually it was abandoned as Intel returned to the Pentium 3 architecture with what became the hugely successful Pentium M, Core Solo, Core Duo and Core 2 Duo releases.

A New Millennium

For those of us who purchased (and typically built) PCs in the 1990s, it was worth upgrading your PC every year or two. The newer PCs were just that much better. The market, while large, was much smaller than it is today, to the point where Microsoft could count on a rapid turnover in PCs and a flow of purchases from first-time PC owners.

As Joel Spolsky noted in How Microsoft Lost the API War:

Microsoft just waited for the next big wave of hardware upgrades and sold Windows, Word and Excel to corporations buying their next round of desktop computers (in some cases their first round). So in many ways Microsoft never needed to learn how to get an installed base to switch from product N to product N+1.

In the last 8-10 years PCs have gotten better, but for more and more people what they have is enough. My father has an old PC cannibalized from parts I bought in 2002. It runs a (modern) browser, Word and Excel, and that’s all he needs. There will be absolutely no need to upgrade that PC or purchase a new one until it dies. This is the case for most consumers and businesses.

So rather than buying a new operating system every 2-3 years, the music stopped playing when Windows XP was the OS du jour. Everyone sat down and they haven’t moved since.

Interestingly, Office suffered the same problem at roughly the same time. Office 97 was basically feature complete for 95%+ of all users. Every version since has been an attempt to get businesses to buy it for its enterprise tinselware. Sure, there have been minor improvements but overall, Office 97 is it.

Fighting Fires

As the scope of Windows has grown over the years, Microsoft has been fighting fires to defend its franchise that include:

  • Java: “run anywhere” (well, write once, test everywhere) was a threat to the Windows lock-in;
  • Games: OpenGL also threatened the lock-in since DirectX is Windows-specific;
  • Developer Tools: once Borland was a major player. Now it’s all about Visual Studio;
  • The Web: the free Internet Explorer was a desperate attempt to fend off Netscape that was ultimately successful.

The last one is important because even though Microsoft won the battle they lost the war. Microsoft’s hubris, breaking of backwards compatibility, ever-changing platforms and standards and so on probably accelerated the adoption of the Web as a platform for application delivery.

Microsoft was once an innovator trying to get market share. It was then that they were at their best. At some point companies become so large that they switch from being innovators to defenders. No longer are they concerned with making the best product. They are primarily concerned with defending what they already have.

The Madness Begins

Even before it was released, Vista (or Longhorn as it was called then) had a lot of people concerned. Microsoft had seemingly decided that it was OK to start breaking backwards compatibility.

Faced with people only buying an operating system (meaning a PC) every 5-8 years, what did Microsoft do? They did what most marketing eggheads would do: they raised prices. Instead of getting $100 from a consumer every 2-3 years, let’s charge $250 every 5-8 years, with revenue growth built in to please our shareholders.

Earth to Microsoft: if you charge people more they will buy less.

Segmentation Insanity

A bigger problem was segmentation. Where there were basically two versions of Windows XP (Home and Professional), ignoring the Server version, in classic “how much money do you have?” pricing there were now four:

  • Home Basic;
  • Home Premium;
  • Business; and
  • Ultimate

But it gets worse…

Retail, Upgrade or OEM?

Say what now? Try and explain this one to a non-techie. For Windows 7 this is actually worse. For example, Windows 7 Professional Upgrade is more expensive than Windows 7 Professional Retail. What the…?

32 or 64 bits?


This is perhaps the most egregious transgression. Several months ago a friend who has been using PCs for 15 years and was upgrading his computer asked me about getting the right version of Vista (32 or 64 bit), since he was thinking about getting 4GB or more of RAM. He’s reasonably proficient. Try explaining it to someone who isn’t.

It reminds me of this classic UI blunder:

Why are you asking consumers about questions they don’t understand and don’t care about when choosing which OS to buy? Just sell one version. If an advanced user wishes to install the 64 bit version, let them do so during installation.

Activations and DRM

One of the scourges of the last decade has been the rise of DRM (“digital rights management”). As I previously said, people have an innate sense of fairness. If you tell them they don’t own something like a program, video or song even though they paid for it, they aren’t going to like it.

Even Google got sucked into this sham (probably at the behest of Big Content) and discovered the downside (for them) when they had to refund users on closing Google Video. Whoops.

Vista came with draconian activation limits. Microsoft eventually relented somewhat, particularly with the (pricey) retail version.

Now compare that to a version I can find on The Pirate Bay that simply works and you begin to see that DRM creates pirates.

Windows 7? It Gets Worse

Don’t believe me? Consider this table:


And that’s the simple version of the chart, which doesn’t include 32 and 64 bit combinations. The upgrade paths have also changed: you can, say, upgrade from Windows Vista Home Premium to Windows 7 Professional.

But there’s no other choice, right? Wrong.

Timing is Everything

From the mid to late 90s Apple was in the wilderness. Windows had eroded Mac market share to the small single digits. Several attempts were made to turn this around such as Apple’s purchase of Steve Jobs’ NeXT and the Rhapsody OS.

With the return of the king, there were (eventually) two great successes. Firstly, the iPod and (more importantly) iTunes. Secondly, Jobs put the nail in the coffin of PowerPC by switching Apple hardware to Intel’s x86 architecture, citing “power efficiency”, which many found laughable given the Pentium 4’s power and heat issues.

Jobs’ timing, however, was superb (and undoubtedly not luck). Intel’s Centrino platform became all-conquering.

Some Things Are Greater Than the Sum of Their Parts

This was a risky move. Differentiation is a key component of Apple’s strategy. If Macs use the same hardware as PCs, why pay extra? Macbook and iMac market share has recovered from around 2% to be almost 10% of US shipments.

Part of Apple’s appeal has always been what I term “countercultural knockback”, meaning there are a certain group of people who will attach themselves to something—sometimes fanatically—in part because it isn't popular. Another part of it is that Apple aims itself at the top end of the market quite deliberately. But a huge part that’s often overlooked by detractors is that the whole package is attractive.

Apple didn’t invent the concept of a sleek laptop, or a digital music player or a phone with Internet capability. They just did it better than anyone else.

One Size Fits All

Consumers don’t like being forced to make choices they don’t care about or don’t understand. Two years ago, Steve Jobs famously made fun of Vista segmentation during his WWDC keynote.

Don’t Make Me Think

So I’m looking to buy a copy of Windows 7. I initially settled on the Home Premium version. A friend pointed out to me that XP Compatibility Mode—something I’m not certain I won’t need—is only in the Professional and Ultimate versions.

Microsoft’s policy on OEM versions takes some figuring out. As it turns out, it all comes down to the motherboard: change your motherboard and you need a new OEM license. This may or may not be enforced. I’ve been using Windows XP for over 7 years and I’ve had 3 motherboards in the last 2 years, so I want the retail version instead.

Well, that’s going to cost $449 for the Professional version. Oddly, the Ultimate version is only $20 more. Was there really a need to differentiate between these two versions for $20? Why not just have one version that covers both?

That’s a lot of money to pay for an operating system. Times have changed. No longer does a PC cost $3,000. You can buy a quad core box with 4GB of RAM for as little as $500. And I need to pay almost 100% of the hardware costs for Windows? Seriously? What does it do that XP doesn’t? Not a lot (that I care about anyway).

Are you kidding me?

Fear, Uncertainty, Doubt

FUD has been a hallmark of tech marketing. Microsoft is no exception. Just last month, Microsoft announced no TCP/IP patch for Windows XP, claiming the code was too old. Bullshit. It’s a marketing strategy to convince us we need to upgrade.

They tried it with Vista too. The long-awaited DirectX 10 update was Vista-only. Microsoft’s marketing suggested you might not be able to run the latest games if you had XP instead of Vista (when most hardcore gamers were sticking with XP for performance reasons).

Microsoft has been using FUD against Linux for years. There’s something amusing (even ironic) about them using FUD on one of their own products.

Lipstick on a Pig

What is Windows 7, really? I’ll give Microsoft props for one thing: the Windows 7 marketing is a success. A lot of people are excited about it. I used the RC version for a few months and it’s not bad. The NTFS support is noticeably faster and I didn’t get those stupid “Preparing to delete” boxes when I deleted a directory tree. I must admit I also like finding programs with the Start Menu’s speed search.

Could these features have been added on an XP base? Absolutely.

Vista already has a service pack. As far as I’m concerned, Windows 7 is just Vista Service Pack 2. How is it really different to Vista? The UAC security is slightly less annoying, but it’s basically the same. Maybe wireless is a bit better, but these are all incremental changes.

The 90s were a pioneering period for personal computing where they went from niche to mainstream. Operating systems and applications are both mature now. Even Linus Torvalds has recognized this:

APC: When do you expect to see a kernel version 3.0? What will be the major changes or differences from the 2.6 series?


LT: We really don't expect to need to go to a 3.0.x version at all: we've been very good at introducing even pretty big new features without impacting the code-base in a disruptive manner, and without breaking any old functionality.


That, together with the aforementioned lack of a marketing department that says "You have to increase the version number to show how good you are!" just means that we tend to just improve everything we can, but you're not likely to see a big "Get the new-and-improved version 3!" campaign.

Basically there (probably) won’t be a Linux 3.0. There’s no need. Microsoft needs to recognize that need isn’t a factor for consumers. Whatever they have, it’s enough.

Old Versions Cost You Money

Anyone who has written software for a living knows this to be true: supporting old versions of your software costs you money. You want your customers to be on the latest version.

Here Apple is clearly more successful at getting their users to upgrade, in part helped by the low (US$29) cost. You can even buy 5 licenses for home for US$49. Each release seems to get bigger.

Cost is clearly a factor here. Detractors would argue Apple is charging for service packs. Maybe so. But it’s clear consumers prefer to pay less money more often.

Ship Early, Ship Often

The other way you cost yourself money is by increasing the time between releases. Costs scale exponentially rather than linearly: if it takes you four years to ship a product, it will probably cost you twice what it would to ship two products at two-year intervals.

Long releases tend to be over-ambitious releases. What’s more, there is a huge likelihood that market conditions will have changed by the time you release, so you end up spending a lot of effort changing an unshipped product before it even gets out the door. There is no better example than the Duke Nukem Forever debacle.

And of course complexity is the enemy. The growth in Windows lines of code shows no signs of abating.

The Business Market

This is both Microsoft’s biggest source of revenue (for both Windows and Office licenses) and its biggest thorn in the side. It’s also a problem Apple does not have.

Most companies buy PCs for their employees. To help with support costs they come up with a standard installation, called an SOE (“standard operating environment”). This version will then come on a CD that’ll install everything. It’s expensive to change and roll out. Most companies will have a Windows 2000 or XP based SOE.

A ten year old PC running Win2K and Office 97 still does its required job. This isn’t just being cheap. Why would you roll out new hardware that from a functionality perspective does the same thing? There’s no business case for it. What do you think a hospital will choose between new PCs of dubious utility and a $3 million MRI that’ll save some lives?

So it’s understandable, but these people are the bane of Web developers, as they’re also responsible for Internet Explorer 6’s dogged ~10% market share.

So How Does Microsoft Sell Operating Systems?

Good question, one for which Microsoft has no answer. It probably doesn’t help that the man at the helm (Steve Ballmer) isn’t a programmer. He’s not even a techie. He’s a business guy who thinks in terms of marketing, business strategy and gap analysis. At least Bill Gates was a programmer. Bill Gates’ Microsoft was an innovator no matter what else you could say about it or him.

I have to agree with Jeff Atwood on this one. Microsoft is getting pricing wrong. Prices need to be low enough that it ceases to be a major purchase.

Microsoft Just Doesn’t “Get” Marketing

What can I say? Windows 7 Launch Parties? Microsoft retail stores (hint: Apple had compelling consumer products rather than “me too!” wannabe products before they opened stores)? Vista ads with Jerry Seinfeld? I’m shaking my head.

So Which Windows 7 to Buy?

Non-coincidentally, Apple just cut the prices of Macbook Pros and released a new “low” cost white Macbook. In Australia this was at least in part due to the appreciation of the Australian dollar in recent months. A Macbook Pro 13 is now only a few hundred dollars more expensive than a (plastic) Dell Studio XPS 13.

I’m not a .Net developer so I’m not tied to Windows. My favourite IDEs come from JetBrains (IntelliJ IDEA and, increasingly, Web IDE) and they run on Windows, Macs and Linux.

Windows virtualization and emulation (eg WINE) are getting sufficiently good that you can run Office 2007 under Ubuntu.

I need to install Cygwin to get a workable command line on Windows anyway. It also makes Git work easier.

As a developer I’m finding the Macbook an increasingly attractive option. I only have three criticisms and concerns:

  • Apple reversed themselves on adopting ZFS as a replacement for the HFS+ filesystem (which Linus Torvalds described as “scary”);
  • Apple’s bizarre stance on delaying Java releases to integrate their look and feel. Java 6 for the Mac was almost a year late; and
  • I’ll have to buy another copy of Civilization 4.

Linux is of course an option but I’ve been there and done that. Fact is, both Windows and MacOS are slicker than Gnome or KDE.


Microsoft reminds me of an ageing housewife revelling in her high school glory days whose greatest achievement is that she still fits into her cheerleader outfit. Sure you were the popular girl once but that was 20 years ago. Times have changed.

It isn’t 1995 anymore. Next Christmas most people will be fine with a $200-300 netbook. Why would such people ever pay that price again for an operating system? Getting existing users to upgrade is (or should be) a key strategy for Microsoft but, like any incumbent, business weenies are now running the asylum and they’re more concerned about having room for revenue growth than about actually selling products people want to buy.

Bring it on Apple!