Lost in Translation or Why GWT Isn’t the Future of Web Development

I recently read Is GWT the future of web development? The post postulates that GWT (“Google Web Toolkit”) is the future because it introduces type safety, leverages the existing base of Java programmers and it has some widgets.

Google has recently put their considerable weight behind it, most notably with Google Wave. I’m naturally hesitant to bet against Google or Lars Rasmussen but the fact is that’s what I’m doing.

On Type Safety and Static Typing

In the 90s type-safety and static typing ruled almost unchallenged, first with C then C++ and Java (yes I realize Pascal, Algol-68 and a plethora of other languages came beforehand). Perl was the calling card of smug, bearded Unix systems administrators.

Performance, and the challenge of managing increasing complexity on relatively low-powered hardware (certainly by today’s standards), was the impetus behind this movement. The ideas that variables didn’t need to be declared or that a type could morph as required were tantamount to the sky falling.

Javascript, PHP, Python, Perl, Ruby and other languages have, over the last decade (and yes, some have histories going back far earlier than that), clearly demonstrated that the sky hasn’t fallen with loose and dynamic typing.

On Leveraging Java Programmers

This sounds good in theory but let me put it to you another way: if you needed textbooks in German, would you write them in German, or write them in English and have a tool translate them into German?

Anyone who has studied or knows a second language knows that some things just don’t translate. The same applies to programming languages. Javascript has lots of features that Java doesn’t: first class functions, closures, extension methods, a vastly different “this” context, anonymous objects, dynamic typing and so on.
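To make the contrast concrete, here is a small sketch (all names are illustrative) of a few of those features, none of which has a direct Java equivalent:

```javascript
// First-class functions and closures: makeCounter returns a function
// that keeps a private reference to its enclosing scope.
function makeCounter() {
  var count = 0;
  return function () {
    count += 1;
    return count;
  };
}

var next = makeCounter();
next(); // 1
next(); // 2

// Anonymous objects and dynamic typing: no class declaration required,
// and properties can be added on the fly.
var point = { x: 1, y: 2 };
point.label = "origin-ish";

// Extending every instance of a built-in type via its prototype.
String.prototype.shout = function () {
  return this.toUpperCase() + "!";
};
"hello".shout(); // "HELLO!"
```

A Java-to-Javascript compiler has to either emulate or forbid each of these, which is exactly where the intersection-of-strengths problem bites.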

The problems you face when writing a “cross-compiler” are:

  1. The weaknesses and limitations of the end result are the combined weaknesses of both languages (or “A union B” in a maths context where A and B are the two languages);
  2. The strengths of the end result are the common strengths (“A intersect B”) of the two languages;
  3. The idioms are different; and
  4. Abstractions are leaky. Jeff Atwood characterized this as All Abstractions Are Failed Abstractions.

This is the same basic problem you see with ORMs like Hibernate: the object-relational impedance mismatch. Every now and again you end up spending half a day figuring out the correct combination of properties, annotations, XML and VM parameters to have a query generate the right two lines of SQL that’ll actually be performant.

Another problem is that GWT fools naive Java developers into thinking they don’t need to learn Javascript.

My position can be summed up as: GWT treats Javascript as a bug that needs to be solved.

On Widgets and Maturity

I’ve programmed with GWT. The widget selection is woeful. The standard GWT widgets look awful, even amateurish. There are some third-party options but ExtGWT is a shockingly bad library. SmartGWT looks like a better alternative (and is actually a community effort rather than a split GPL/commercial mish-mash from someone who simply doesn’t understand Java Generics). There aren’t many other choices.

Javascript has many choices: YUI, ExtJS (completely different beast to ExtGWT), Dojo, jQuery UI, SmartClient and others. Not only is there substantially more choice but the choices are substantially more mature.

Development Speed is King

Java Web apps can take minutes to build and deploy. Within certain restrictions you can hot-deploy classes and JSPs. One of the wonderful things about PHP and Javascript development is that the build and deploy step is typically replaced by saving the file you’re working on and clicking reload on your browser.

GWT compiles are brutal, so much so that significant effort has gone into improving the experience in GWT 1.6+ and 2.0: draft compiles, parallel compilation, optimized vs unoptimized Javascript, and targeting selected browsers in development. These can all help, but they are partly counteracted by compile times that increase with each version.

Also, compiles are only required when you change your service interfaces. Pure client-side changes can be tested by refreshing the hosted browser (or a real browser in GWT 2.0+). Server-side changes that don’t alter the interface don’t technically require a GWT recompile, but this distinction can be awkward to implement in a build (in either Ant or Maven).

Why are long compile times a problem?

The answer comes from The Joel Test: 12 Steps to Better Code:

We all know that knowledge workers work best by getting into "flow", also known as being "in the zone", where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration. This is when they get all of their productive work done. Writers, programmers, scientists, and even basketball players will tell you about being in the zone.


The trouble is, getting into "the zone" is not easy. When you try to measure it, it looks like it takes an average of 15 minutes to start working at maximum productivity.


The other trouble is that it's so easy to get knocked out of the zone. Noise, phone calls, going out for lunch, having to drive 5 minutes to Starbucks for coffee, and interruptions by coworkers -- especially interruptions by coworkers -- all knock you out of the zone.

Even a one minute compile can knock you out of the zone. Even Jeff Atwood—still desperately clinging to his irrational hatred of PHP like an identity-asserting life preserver—has seen the light and is a self-proclaimed Scripter at Heart.

Not Every Application is GMail

I think of a Web application as something like GMail. It is typically a single page (or close to it) and will often mimic a desktop application. Traditional Web pages may use Javascript varying from none to lots but still rely on a fairly standard HTTP transition between HTML pages.

GWT is a technology targeted at Web applications. Load times are high (because it’s not hard to get to 1MB+ of Javascript) but that’s OK because in your whole session you tend to load only one page, once. Web pages are still far more common than Web applications, and GWT is not applicable to that kind of problem.

Even if you limit the discussion to Web applications, all but the largest Web applications can be managed with a Javascript library in my experience.

Now for something truly monumental in size I can perhaps see the value in GWT, or at least the value of type checking. Still, I’d rather deal with dynamic loading of code in Javascript than I would with GWT 2.0+ code splitting. Compare that to, say, YUI 3 dynamic loading, which leverages terse Javascript syntax and first class functions.
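The pattern that makes this terse is worth sketching. The following is a toy illustration of the callback-based loading YUI 3 uses, not YUI’s actual API or implementation; all names are made up:

```javascript
// A toy module loader in the style of YUI 3's use(): each module is a
// factory function, and use() assembles a sandbox from the requested ones.
var registry = {};

function register(name, factory) {
  registry[name] = factory;
}

function use(names, callback) {
  var sandbox = {};
  names.forEach(function (name) {
    registry[name](sandbox); // each factory attaches its API to the sandbox
  });
  callback(sandbox); // first-class functions keep the call site terse
}

// Register a fake 'node' module, then load it on demand.
register('node', function (Y) {
  Y.one = function (selector) {
    return { id: selector };
  };
});

use(['node'], function (Y) {
  var el = Y.one('#status');
  // el.id is '#status'
});
```

In real YUI 3 the module list drives asynchronous script download, but the shape of the call site is the same: a list of module names and a callback closure.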

Of Layers and Value Objects

It’s no secret that Java programmers love their layers. No sooner do you have a Presentation Layer, a Controller Layer and a Repository Layer than someone suggests you also need a Database Abstraction Layer, a Service Layer, a Web Services Layer and a Messaging Layer.

And of course you can’t use the same value object to pass data between them so you end up writing a lot of boilerplate like:

public class TranslationUtils {
  public static CustomerVO translate(Customer customer) {
    CustomerVO ret = new CustomerVO();
    // ...one get/set line per property, by hand (properties illustrative)...
    ret.setName(customer.getName());
    ret.setEmail(customer.getEmail());
    return ret;
  }
}

Or you end up using some form of reflection (or even XML) based property copying mechanism.

Apparently this sort of thing is deemed a good idea (or is at least common practice). The problem, of course, is that if your interface mentions that class, you’ve created a dependency.

What’s more, Java programmers have a predilection for concerning themselves with swapping out layers or putting in alternative implementations that never happen.

I am a firm believer that lines of code are the enemy. You should have as few of them as possible. As a result, it is my considered opinion that you are better off passing around one object that you can dynamically change as needed than writing lots of boilerplate property copying, which is error-prone through sheer monotony and, because of subtle differences, can’t be solved (at least not completely) with automated tools.

In Javascript, of course, you can just add properties and methods to classes (all instances) or individual instances as you see fit. Since Java doesn’t support that, it creates a problem for GWT: what do you use for your presentation objects? Libraries like ExtGWT have ended up treating everything as Maps that go through several translations (including to and from JSON). So where is your type safety?
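For comparison, this is all it takes in Javascript (a trivial sketch; the names are made up):

```javascript
// A presentation object: just attach whatever the view needs, when it
// needs it, without declaring a CustomerVO-style class first.
var customer = { id: 42, name: "Acme" };
customer.displayName = customer.name + " (#" + customer.id + ")";

// Adding a method to every instance of a "class" via its prototype.
function Customer(name) {
  this.name = name;
}
Customer.prototype.greet = function () {
  return "Hello, " + this.name;
};

new Customer("Bob").greet(); // "Hello, Bob"
```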

On Idioms

Managers and recruiters tend to place too much stock in what languages and frameworks you (as the programmer candidate) have used. Good programmers can (and do) pick up new things almost constantly. This applies to languages as well. Basic control structures are the same, as are the common operations (at least between two languages in the same family, i.e. imperative, functional, etc).

Idioms are harder. A lot of people coming from, say, a Java, C++ or C# background to something like PHP will try to recreate what they did in their “mother tongue”. This is nearly always a mistake.

Object-oriented programming is the most commonly misplaced idiom. PHP is not object-oriented (“object capable” is a more accurate description). Distaste for globals is another. Few things are truly global in PHP, and serving HTTP requests is quite naturally procedural most of the time. As Joel notes in How Microsoft Lost the API War:

A lot of us thought in the 1990s that the big battle would be between procedural and object oriented programming, and we thought that object oriented programming would provide a big boost in programmer productivity. I thought that, too. Some people still think that. It turns out we were wrong. Object oriented programming is handy dandy, but it's not really the productivity booster that was promised. The real significant productivity advance we've had in programming has been from languages which manage memory for you automatically.

The point is that Java and Javascript have very different idioms. Well-designed Javascript code will do things quite differently from well-designed Java code, so by definition you’re losing something by converting Java to Javascript: idioms can’t be automagically translated.


Scripting is the future. Long build and deploy steps are anachronisms that run counter to both industry trends and maximizing productivity. This trend has been developing for many years.

Where once truly compiled languages (like C/C++, and not Java/C#, which are “compiled” into an intermediate form) accounted for the vast bulk of development, they are now the domain of the tools we use (Web browsers, operating systems, databases, Web servers, virtual machines, etc). They have been displaced by the “semi-compiled” managed platforms (Java and .Net primarily). Those too will have their niches, but for an increasing amount of programming they too will be displaced by more script-based approaches.

GWT reminds me of trying to figure out the best way to implement a large-scale, efficient global messaging system using telegrams where everyone else has switched to email.

Microsoft, Marketing Insanity and Windows Piracy

Thursday marks Microsoft’s release of the Windows 7 operating system. This is an opportune moment to reflect on Microsoft’s marketing strategy, because it’s like they want me to pirate Windows. And I feel the need to rant.

A Brief History of Windows

Windows 3.0 was released in 1990 in an attempt to stave off the successful (but expensive) Macintosh. And it certainly ticked all the boxes (from a marketing perspective at least).

Various incarnations of Windows 3.x followed over the next 2-3 years. Probably the most interesting thing is that Microsoft only stopped selling Windows 3.x licenses in November 2008.

Windows 95 (“Chicago”) followed in 1995. What some don’t realize is that Windows 95 was in many ways technically superior to MacOS, most notably:

  • Pre-emptive multitasking rather than cooperative multitasking. Rather than waiting for an application to yield, the operating system could interrupt. Nothing new to UNIX but certainly new to Windows and MacOS;
  • Virtual address spaces. Macs at the time had to allocate memory slices to programs. Win95 programs could simply ask for more memory. Depending on your hardware, this could be physical RAM or hard disk space. The OS could swap between them while the application was running too. Again, nothing new for UNIX.

The biggest impact of Windows 95 was that it killed off non-Microsoft DOS.

Another notable feature was DirectX, Microsoft’s gaming API, though it wasn’t part of the original release. In the burgeoning world of hardware acceleration it quickly supplanted OpenGL as a gaming API, to the point that even stalwart OpenGL advocates like id Software are abandoning it; DirectX was a commercial success and the majority player almost a decade earlier.

Separately Windows NT had sprung into existence to break the connection between Windows and the DOS shell. Windows NT 4.0 in 1996 was probably the first version with broad market success, targeted at businesses.

The next notable release was Windows 2000, the successor to the venerable Windows NT 4.0, as it began the convergence of the NT and 9X (including ME) families. This culminated in 2001 with the release of Windows XP.

The Wintel Alliance

The rise of Windows and the decline of IBM’s leadership of the PC coincided with a marriage of convenience between Microsoft (Windows) and Intel, or “Wintel”. This has never been a comfortable arrangement but it is based on a fairly simple principle:

Most people only buy operating systems with new computers.

That conventional wisdom has since been disproven by Apple but more on that later.

The dark side of this marriage is planned obsolescence. Chipsets changing, RAM standards changing, CPU sockets changing and so on. Some of it’s necessary and understandable. Savvy consumers have long figured out that buying high quality (and high cost) components for their PCs for upgrades down the line is a waste of time and money.

The biggest threat to Intel came from the disaster that was (and is) Itanium, AMD cutting them off at the knees with the hugely successful Athlon line of processors, AMD’s x86-64 instruction set putting the final nail in Itanium’s coffin and the disaster that was the Pentium 4. I say “disaster” but it was mixed. The gigahertz marketing campaign against AMD was successful. As a technology it was a disaster.

Why do I say that? Because eventually it was abandoned as Intel returned to the Pentium 3 architecture with what became the hugely successful Pentium M, Core Solo, Core Duo and Core 2 Duo releases.

A New Millennium

For those of us who purchased (and typically built) PCs in the 1990s, it was worth upgrading your PC every year or two. The newer PCs were just that much better. The market, while large, was much smaller than it is today, to the point where Microsoft could count on a rapid turnover in PCs and a steady flow of purchases from first-time PC owners.

As Joel Spolsky noted in How Microsoft Lost the API War:

Microsoft just waited for the next big wave of hardware upgrades and sold Windows, Word and Excel to corporations buying their next round of desktop computers (in some cases their first round). So in many ways Microsoft never needed to learn how to get an installed base to switch from product N to product N+1.

In the last 8-10 years PCs have gotten better, but for more and more people what they have is enough. My father has an old PC cannibalized from parts I bought in 2002. It runs a (modern) browser, Word and Excel, and that’s all he needs. There will be absolutely no need to upgrade that PC or purchase a new one until it dies. This is the case for most consumers and businesses.

So rather than buying a new operating system every 2-3 years, the music stopped playing when Windows XP was the OS du jour. Everyone sat down and they haven’t moved since.

Interestingly, Office suffered the same problem at roughly the same time. Office 97 was basically feature-complete for 95%+ of all users. Every version since has been an attempt to get businesses to buy it for its enterprise tinselware. Sure, there have been minor improvements but overall, Office 97 is it.

Fighting Fires

As the scope of Windows has grown over the years, Microsoft has been fighting fires to defend its franchise, including:

  • Java: “run anywhere” (well, write once, test everywhere) was a threat to the Windows lock-in;
  • Games: OpenGL also threatened the lock-in since DirectX is Windows-specific;
  • Developer Tools: once Borland was a major player. Now it’s all about Visual Studio;
  • The Web: the free Internet Explorer was a desperate attempt to fend off Netscape that was ultimately successful.

The last one is important because even though Microsoft won the battle they lost the war. Microsoft’s hubris, breaking of backwards compatibility, ever-changing platforms and standards and so on probably accelerated the adoption of the Web as a platform for application delivery.

Microsoft was once an innovator trying to gain market share. It was then that they were at their best. At some point companies become so large that they switch from being innovators to defenders. No longer are they concerned with making the best product. They are primarily concerned with defending what they already have.

The Madness Begins

Even before it was released, Vista (or Longhorn as it was called then) had a lot of people concerned. Microsoft had seemingly decided that it was OK to start breaking backwards compatibility.

Faced with people only buying an operating system (meaning a PC) every 5-8 years, what did Microsoft do? They did what most marketing eggheads would do: they raised prices. Instead of getting $100 from a consumer every 2-3 years, charge $250 every 5-8 years, with room for revenue growth to please the shareholders.

Earth to Microsoft: if you charge people more they will buy less.

Segmentation Insanity

A bigger problem was segmentation. Windows XP had basically two versions (Home and Professional), ignoring the Server variants. In classic “how much money do you have?” pricing, there were now four versions:

  • Home Basic
  • Home Premium
  • Business
  • Ultimate

But it gets worse…

Retail, Upgrade or OEM?

Say what now? Try and explain this one to a non-techie. For Windows 7 this is actually worse. For example, Windows 7 Professional Upgrade is more expensive than Windows 7 Professional Retail. What the…?

32 or 64 bits?


This is perhaps the most egregious transgression. Several months ago I had a conversation with a friend who was upgrading his computer, about getting the right version of Vista (32 or 64 bit) since he was thinking about getting 4GB or more of RAM. He has been using PCs for 15 years and is reasonably proficient. Try explaining it to someone who isn’t.

It reminds me of this classic UI blunder:

Why are you asking consumers questions they don’t understand and don’t care about when they’re choosing which OS to buy? Just sell one version. If an advanced user wishes to install the 64-bit version, let them do so during installation.

Activations and DRM

One of the scourges of the last decade has been the rise of DRM (“digital rights management”). As I previously said, people have an innate sense of fairness. If you tell them they don’t own something like a program, video or song even though they paid for it, they aren’t going to like it.

Even Google got sucked into this sham (probably at the behest of Big Content) and discovered the downside (for them) when they had to refund users when they closed Google Video. Woops.

Vista came with draconian activation limits. Microsoft eventually relented somewhat, particularly with the (pricey) retail version.

Now compare that to a version I can find on The Pirate Bay that simply works and you begin to see that DRM creates pirates.

Windows 7? It Gets Worse

Don’t believe me? Consider this table:


And that’s the simple version of the chart, which doesn’t include the 32 and 64 bit combinations. The upgrade options have also changed; you can, say, upgrade from Windows Vista Home Premium to Windows 7 Professional.

But there’s no other choice, right? Wrong.

Timing is Everything

From the mid to late 90s Apple was in the wilderness. Windows had eroded Mac market share to the small single digits. Several attempts were made to turn this around such as Apple’s purchase of Steve Jobs’ NeXT and the Rhapsody OS.

With the return of the king, there were (eventually) two great successes. Firstly, the iPod and (more importantly) iTunes. Secondly, Jobs put the nail in the coffin for PowerPC by switching Apple hardware to Intel’s x86 architecture. Jobs cited “power efficiency” as the reason, which many found laughable given the Pentium 4’s power and heat issues.

Jobs’ timing, however, was superb (and undoubtedly not luck). Intel’s Centrino platform became all-conquering.

Some Things Are Greater Than the Sum of Their Parts

This was a risky move. Differentiation is a key component of Apple’s strategy. If Macs use the same hardware as PCs, why pay extra? Macbook and iMac market share has recovered from around 2% to be almost 10% of US shipments.

Part of Apple’s appeal has always been what I term “countercultural knockback”, meaning there are a certain group of people who will attach themselves to something—sometimes fanatically—in part because it isn't popular. Another part of it is that Apple aims itself at the top end of the market quite deliberately. But a huge part that’s often overlooked by detractors is that the whole package is attractive.

Apple didn’t invent the concept of a sleek laptop, or a digital music player or a phone with Internet capability. They just did it better than anyone else.

One Size Fits All

Consumers don’t like being forced to make choices they don’t care about or don’t understand. Two years ago, Steve Jobs famously made fun of Vista segmentation during his WWDC keynote.

Don’t Make Me Think

So I’m looking to buy a copy of Windows 7. I initially started on the Home Premium version. A friend pointed out to me that the XP Compatibility Mode—something I'm not certain I won't need—is only in the Professional and Ultimate versions.

Microsoft’s policy on OEM versions takes some figuring out. As it turns out, it all comes down to the motherboard: change your motherboard and you need a new OEM license. This may or may not be enforced. I’ve been using Windows XP for over 7 years and I’ve had 3 motherboards in the last 2 years, so I want the retail version instead.

Well, that’s going to cost $449 for the Professional version. Oddly, the Ultimate version is only $20 more. Was there really a need to differentiate between these two versions for $20? Why not just have one version that covers both?

That’s a lot of money to pay for an operating system. Times have changed. No longer does a PC cost $3,000. You can buy a quad-core box with 4GB of RAM for as little as $500. And I need to pay almost 100% of the hardware cost again for Windows? Seriously? What does it do that XP doesn’t? Not a lot (that I care about, anyway).

Are you kidding me?

Fear, Uncertainty, Doubt

FUD has long been a hallmark of tech marketing, and Microsoft is no exception. Just last month, Microsoft announced there would be no TCP/IP patch for Windows XP, claiming the code was too old. Bullshit. It’s a marketing strategy to convince us we need to upgrade.

They tried it with Vista too. The long-awaited DirectX 10 update was Vista-only. Microsoft’s marketing line was to suggest you might not be able to run the latest games if you had XP instead of Vista (when most hardcore gamers were sticking with XP for performance reasons).

Microsoft has been using FUD against Linux for years. There’s something amusing (even ironic) about them using FUD on one of their own products.

Lipstick on a Pig

What is Windows 7, really? I’ll give Microsoft props for one thing: the Windows 7 marketing is a success. A lot of people are excited about it. I used the RC version for a few months and it’s not bad. The NTFS support is noticeably faster and I didn’t get those stupid “Preparing to delete” boxes when I deleted a directory tree. I must admit I also like finding programs via the Start Menu’s search.

Could these features have been added on an XP base? Absolutely.

Vista has a service pack already. As far as I’m concerned, Windows 7 is just Vista Service Pack 2. How is it really different from Vista? The UAC security is slightly less annoying, but it’s basically the same. Maybe wireless is a bit better, but these are all incremental changes.

The 90s were a pioneering period for personal computing, when it went from niche to mainstream. Operating systems and applications are both mature now. Even Linus Torvalds has recognized this:

APC: When do you expect to see a kernel version 3.0? What will be the major changes or differences from the 2.6 series?


LT: We really don't expect to need to go to a 3.0.x version at all: we've been very good at introducing even pretty big new features without impacting the code-base in a disruptive manner, and without breaking any old functionality.


That, together with the aforementioned lack of a marketing department that says "You have to increase the version number to show how good you are!" just means that we tend to just improve everything we can, but you're not likely to see a big "Get the new-and-improved version 3!" campaign.

Basically there (probably) won’t be a Linux 3.0. There’s no need. Microsoft needs to recognize that, for consumers, need is no longer a factor: whatever they have, it’s enough.

Old Versions Cost You Money

Anyone who has written software for a living knows this to be true: supporting old versions of your software costs you money. You want your customers to be on the latest version.

Here Apple is clearly more successful at getting their users to upgrade, in part helped by the low (US$29) cost. You can even buy 5 licenses for home use for US$49. And uptake seems to grow with each release.

Cost is clearly a factor here. Detractors would argue Apple is charging for service packs. Maybe so. But it’s clear consumers prefer to pay less money more often.

Ship Early, Ship Often

The other way you cost yourself money is by increasing the time between releases. Costs scale exponentially rather than linearly: if it takes you four years to ship a product, it will probably cost you twice what it would to ship two products at two-year intervals.

Long releases tend to be over-ambitious releases. What’s more, there is a huge likelihood that market conditions will have changed by the time you release, so you end up spending a lot of effort changing an unshipped product before it even gets out the door. There is no better example than the Duke Nukem Forever debacle.

And of course complexity is the enemy. The growth in Windows lines of code shows no signs of abating.

The Business Market

This is both Microsoft’s biggest source of revenue (for both Windows and Office licenses) and its biggest thorn in the side. It’s also a problem Apple does not have.

Most companies buy PCs for their employees. To help with support costs they come up with a standard installation, called an SOE (“standard operating environment”). This version will then come on a CD that’ll install everything. It’s expensive to change and roll out. Most companies will have a Windows 2000 or XP based SOE.

A ten-year-old PC running Win2K and Office 97 still does its required job. This isn’t just being cheap. Why would you roll out new hardware that, from a functionality perspective, does the same thing? There’s no business case for it. Which do you think a hospital will choose: new PCs of dubious utility, or a $3 million MRI that’ll save some lives?

So it’s understandable, but these people are the bane of Web developers, as they’re responsible for the dogged ~10% market share of Internet Explorer 6 too.

So How Does Microsoft Sell Operating Systems?

Good question, one for which Microsoft has no answer. It probably doesn’t help that the man at the helm (Steve Ballmer) isn’t a programmer. He’s not even a techie. He’s a business guy who thinks in terms of marketing, business strategy and gap analysis. At least Bill Gates was a programmer. Bill Gates’ Microsoft was an innovator no matter what else you could say about it or him.

I have to agree with Jeff Atwood on this one. Microsoft is getting pricing wrong. Prices need to be low enough that it ceases to be a major purchase.

Microsoft Just Doesn’t “Get” Marketing

What can I say? Windows 7 Launch Parties? Microsoft retail stores (hint: Apple had compelling consumer products, rather than “me too!” wannabe products, before they opened stores)? Vista ads with Jerry Seinfeld? I’m shaking my head.

So Which Windows 7 to Buy?

Non-coincidentally, Apple just cut the prices of Macbook Pros and released a new “low”-cost white Macbook. In Australia this was at least in part due to the appreciation of the Australian dollar in recent months. A Macbook Pro 13 is now only a few hundred dollars more expensive than a (plastic) Dell Studio XPS 13.

I’m not a .Net developer so I’m not tied to Windows. My favourite IDEs come from JetBrains (IntelliJ IDEA and, increasingly, Web IDE) and they run on Windows, Macs and Linux.

Windows virtualization and emulation (eg WINE) are getting sufficiently good that you can run Office 2007 under Ubuntu.

I need to install Cygwin to get a workable command line on Windows anyway. It also makes Git work easier.

As a developer I’m finding the Macbook an increasingly attractive option. I only have three criticisms and concerns:

  • Apple reversed themselves on adopting ZFS as a replacement for HFS+, a filesystem Linus Torvalds described as "scary";
  • Apple’s bizarre stance on delaying Java releases to integrate their look and feel. Java 6 for the Mac was almost a year late; and
  • I’ll have to buy another copy of Civilization 4.

Linux is of course an option but I’ve been there and done that. Fact is, both Windows and MacOS are slicker than Gnome or KDE.


Microsoft reminds me of an ageing housewife revelling in her high school glory days whose greatest achievement is that she still fits into her cheerleader outfit. Sure you were the popular girl once but that was 20 years ago. Times have changed.

It isn’t 1995 anymore. Next Christmas most people will be fine with a $200-300 netbook. Why would such people pay that much again for an operating system (ever)? Getting existing users to upgrade is (or should be) a key strategy for Microsoft, but like any incumbent, the business weenies are now running the asylum and they’re more concerned about having room for revenue growth than with actually selling products people want to buy.

Bring it on Apple!

Stackoverflow, Advertising and the Ethics of a Free Lunch

If the Internet has taught us nothing else, it has taught us that:

  1. Advertising pays for otherwise free services;
  2. People don’t like advertising; and
  3. Advertising works.

These conflicting forces always cause consternation and Stackoverflow is by no means immune.

Stackoverflow is Free

One of the most important features of Stackoverflow is that it is free to browse, ask and answer questions. People like free. It’s one reason I believe Stackoverflow has been so well-received by programmers as a whole. Of course it has its detractors (most of whom seem to lurk on reddit) but, as Bjarne Stroustrup says:

There are only two kinds of languages: the ones people complain about and the ones nobody uses.

The lesson being that anything—not just programming languages—that’s popular will attract countercultural malcontents keen to assert their non-mainstream identities.

Stackoverflow Costs Money

While the content is community-driven (and thus free), the site is not. It takes money for hosting, hardware, software development, administration, support (separate from community moderation) and so on. No one would argue with that. Yet there appears to be a disconnect between the fact that something costs money and the activities required to earn that money. Either that or people mentally file it away as Somebody Else’s Problem.

So how does a “free” service pay for itself?

Micro-Transactions Don’t Work

An excellent resource on this is Fame vs Fortune: Micropayments and Free Content:

This strategy doesn't work, because the act of buying anything, even if the price is very small, creates what Nick Szabo calls mental transaction costs, the energy required to decide whether something is worth buying or not, regardless of price.

Joel Spolsky has also spoken on this subject. In addition to the mental cost of transactions (no matter how small), Joel remarked on how people will do things for free that they will never do if paid (a small amount).

Segmentation Doesn’t Work

Market segmentation is the time-honoured technique of asking people how much money they have when they want to buy something rather than telling them what it costs; in other words, the price becomes a function of how much money they have.

Joel speaks about this in-depth in Camels and Rubber Duckies.

Working my way backwards, this business about segmenting? It pisses the heck off of people. People want to feel they're paying a fair price. They don't want to think they're paying extra just because they're not clever enough to find the magic coupon code. The airline industry got really, really good at segmenting and ended up charging literally a different price to every single person on the plane. As a result most people felt they weren't getting the best deal, and they didn't like the airlines.

Perhaps it’s more correct to say segmentation doesn’t work in the long term.

Advertising Works

It’s clear that advertising works as a means of revenue. Why is it clear? If it didn’t, we wouldn’t have it. Of course, that doesn’t mean it works universally. It is obviously possible to lose money on advertising but it’s clearly possible to make money too.

Traditional media typically lied about conversion rates. The conversion rate is the percentage of visitors, users, viewers or listeners who see, hear or read an advert and take some desirable action, which could be simply clicking through or could result in an inquiry, a sale or the like. Twenty years ago radio and TV marketing departments would work up models based on conversion rates of up to 25%. They did so because there was no way to refute their claims (other than taking the plunge and being disappointed with the result). With the internet such things are precisely measurable, and because the cost of distribution is so low, conversion rates of 1 in 1,000 (or less) are fine.

The other proof that advertising works is spam. Possibly 95% of email is spam, if not more. Clearly the conversion rate is non-zero, otherwise they wouldn’t do it. So that one guy in 10,000 who can’t find porn on the Internet (somehow), or who thinks a plastic bottle of oregano will really extend his… well, you know… he is responsible for spam eclipsing legitimate email by a factor of 20-to-1.
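To see why even minuscule conversion rates can pay, here’s a back-of-envelope sketch in Python. Every figure (conversion rate, revenue per sale, cost per message) is made up purely for illustration; none come from real campaigns.

```python
# Back-of-envelope sketch: why tiny conversion rates can still be profitable.
# All numbers below are hypothetical, chosen only to illustrate the point.

def campaign_profit(recipients, conversion_rate, revenue_per_sale, cost_per_message):
    """Expected profit of a campaign: sales revenue minus distribution cost."""
    sales = recipients * conversion_rate
    return sales * revenue_per_sale - recipients * cost_per_message

# A spammer sending 10 million emails at near-zero cost per message:
spam = campaign_profit(10_000_000, 1 / 10_000, 50.0, 0.00001)

# A TV spot reaching 1 million viewers at a (relatively) high cost per impression:
tv = campaign_profit(1_000_000, 1 / 1_000, 50.0, 0.06)

print(f"spam profit: ${spam:,.0f}")  # positive despite a 0.01% conversion rate
print(f"tv profit:   ${tv:,.0f}")    # negative: advertising can lose money too
```

The asymmetry is the whole story: when distribution costs approach zero, even one sale per 10,000 messages is pure profit, whereas the same campaign with traditional media’s cost structure can go backwards.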

Registration to Read Annoys People

The Evil Hyphen Site (ie Experts Exchange; deliberately no link) exemplifies this point. You can read content for free on that site if you either know where to look for free registration (deliberately not obvious) or you get to the site from Google (even though it says “register to see the answer” the answer is at the bottom of the page; try it).

This annoys people and is part of the reason that site has (justifiably) earnt so much hate.

Sometimes this registration is simply offensive: why do I need to provide you with my date of birth and home address to read your forum post? One has to wonder what the less scrupulous operators are doing with such private information but even if you’re reputable, you don’t need it, so why are you asking?

I can’t speak for anyone else, but as far as that sort of invasive information gathering goes, my name is Jimmy Hoffa, I’m 93 and I live in Afghanistan. I also run a banking company with a million employees and have an annual income of $10,000.

Stackoverflow doesn’t even require you to register to ask questions.

Alienate Your Community and You Have No Site

In this era of social sites (including crowd-sourced sites like Stackoverflow), community matters. A given solution can succeed or fail on the strength of its community; the very same solution may thrive in one community and wither in another.

On a site like Stackoverflow the most important people are the ones who answer questions. This is a somewhat controversial opinion. The editors will disagree (or at least have a higher opinion of their worth). Don’t get me wrong: editing has value but no one celebrates the guy who edited The Great Gatsby, they celebrate F. Scott Fitzgerald.

Such communities can become insular (arguably incestuous) over time. The poster child for this is Wikipedia’s editors, who went so far as to keep a secret, McCarthy-esque blacklist of "problem" users.

Lose your community and you lose your site. The Evil Hyphen Site has already done that.

Stackoverflow and Advertising

Originally Stackoverflow was quite light on advertising, limited to a (mostly textual) right sidebar. The site started out looking much like this:

Now if you have less than 200 reputation it looks like this:

Interestingly, it only seems to look like this in Internet Explorer, even when I delete all my cookies. Firefox and Chrome (cookies deleted) still look like the original.

The difference? The right sidebar is “higher contrast” and there is an ad banner at the top of the question (and another further down). The top ads I believe were once text only, which is far less invasive. But Jeff has stated there won’t be any Flash or animated ads.

The latest controversy concerns the “offensive” advertising indicated to the left. Along with the "offensive" Adobe icons.

Call me crazy but I actually like these Adobe symbols on anything Flash/Flex related. It makes them easier to spot and I think it adds value. Spotting an Adobe icon is easier than finding the exact text that you’re after.

If you do find all questions tagged with one of these sponsored tags, you get this:

Is this too much? In my opinion? No. Others (naturally) disagree. Some to the point that they’ve written a script to remove such sponsored content.

Such activities, if done by a sufficiently large percentage of the userbase, undermine that site’s ability to generate revenue that pays for the site existing.

“But I Don’t Click on Ads Anyway!”

The first obvious rationalization is that ads basically don’t affect you. Bullshit. Ads do two things: they attempt to entice the user to take a particular action (clicking through, buying something and so on), and they raise awareness of a brand, product or service. The latter is all about mind share. It’s subtle and hard to measure, but if you see an ad or a logo often enough you’ll subconsciously recognize it.

“It’s Like Fast Forwarding Through Commercials”

No it isn’t. This defence was used in the ReplayTV lawsuit:

Yet, what the advertisers who are supporting TV are paying for is the potential that you might watch television ads. They know you might channel surf, get up and fart, go grab a smoke, or whatever. The challenge to the advertising agencies is to make commercials that you like to watch, that you want to watch. By editing out the commercials entirely, a priori, the networks can claim that ReplayTV in effect creates a derivative work that deprives them of the possibility that you might actually watch the ads. It is that possibility that generates the value of their ad space, and if something like ReplayTV were widely used, those numbers would drop, big time.

The other way to look at this is that if no one saw the ads, no sponsor would pay for them. If half the audience skipped the ads, it would be worth half as much to the sponsor and so on.

“It’s Already Loaded!”

Irrelevant. Something that’s loaded but never seen is of no value to an advertiser. Also, revenue from advertising can come from simply placing the ad, clicking through the ad or some combination of the two.

Adam Bellaire claims:

I don't think the SO guys are going to "not get paid" by a user script removing images and content after it's already loaded. Nobody can tell who or how many people are using this thing.

It could be argued that Adam genuinely believes advertisers are that clueless, but that would be the height of naiveté. It’s far more likely that this is simply rationalisation.

“I Should Be Able to Opt Out.”

Adam once again pontificates:

…this is a completely opt-in script!

So what? Does it magically cost nothing to provide the service for you specifically? I must’ve missed the “This packet is intended for Adam” bit in the TCP/IP packet structure so the telcos know not to charge for it. I’ve got a good mind to write to W. Richard Stevens so he can issue an emergency addendum to his book.

“But This is ME!”

All of this comes down to what I call the BITM (“But this is ME!”) syndrome, closely related to NIMBY (“Not In My Backyard”). Examples include “I realize there is a speed limit… but this is ME!”, “I realize that I should stop at this almost red light… but this is ME!” and so on. Once again with Adam:

Or do you mean to say that as long as some people see the sponsored ads, then it's okay? Because I agree with that,

To put it another way: I understand someone needs to pay for this, I just don’t see why it should be me.

Alex Papadimoulis succinctly rebuts this:

Ad blockers are like the fat bastards at the grocery store who take handful after handful of free samples. If everyone [Ed] did it, the system would collapse and everyone would lose out. We know it, they know it, and we all just roll our eyes as they stuff their face with cut-up hot dogs and go "whhaaaat?". When they try to justify it ("it doesn't say only one, not my fault they give it away!"), it just makes 'em look worse. As I always say, at least have the decency to admit you're a bastard

The Innate Sense of Fairness

People have an innate sense of fairness to the point it can be manipulated or predicted with neurochemistry. This can work for companies if they treat their users fairly or against them if they don’t.

EA released Spore with a nauseating and invasive DRM system that limited it to three activations (later changed to five). Microsoft tried the same thing with limited activations of OEM Vista. People naturally believe that if they buy something they should be able to reinstall it as many times as they want.

A lawyer may argue that you haven’t bought the software, you’ve bought a license to use it in a limited way. But we’re not talking what’s legal here. We’re talking what’s fair.

It’s this sense of fairness that will cause people to reject the underhanded tactics the Evil Hyphen Site uses to get you to subscribe, or to pirate some piece of software they’ve bought once it runs out of activations (or simply installs a rootkit that allows their system to be hacked). So I guarantee you that if Jeff and Joel go too far with advertising the userbase will react. But we’re not there yet. Nowhere near it.

There’s No Such Thing as a Free Lunch

Sites cost money to develop, maintain and host. They have a right to earn revenue to cover their costs and make a return on the investment they’ve made (and risked). So what’s fair?

My personal opinion is that icons on tags are OK, sponsored links at the top are OK (if you have a problem with that I guess you don’t use Google or pretty much any other search engine) and the side bar is OK. I find the graphical ads littered throughout the question a bit much but then again I have more than 200 reputation so don’t see them.

For what it’s worth I think I’ve even clicked on a couple (Telerik and SpreadsheetGear spring to mind) of the many thousands I’ve no doubt seen but, as mentioned, such a low conversion rate is to be expected.

The euphemism “opt out” in this context is akin to “it’s OK to steal from the supermarket as long as no one else does it”. If you want good services like Stackoverflow to exist then on principle alone you should be supporting them.

Writing scripts to block ads is just selfish. What’s more, if the ads offend you that much it suggests a certain irresponsibility, thoughtlessness, touchiness and intolerance that doesn’t speak well of your character.

Seriously, get a clue.