A New Hope--er... Look

I only started this blog recently. A stock standard Blogger template was a fairly natural choice at the time. Since then I've slowly modified it by:

  • adding social news links;
  • adding StackOverflow flair;
  • making the title more SEO friendly; and
  • adding rudimentary support for posting code snippets.

But it was always my plan to make this blog a little less generic-looking and here is the result!

I decided that I wanted a three column layout with the content on the far left. I also wanted it to be fairly "light", clean and modern. I stumbled across the most excellent Dilectio template and that's what you see here now. I envision customizing it further in days to come.

Still on the list is adding Syntax Highlighter and that should hopefully happen shortly.

Please let me know what you think and how you find the new look.

This is the old site: [screenshot of the previous template]

Apologies for the broken comments! Now fixed.

The Monetization of Java Begins?

I was rather disturbed--and a little alarmed--today to read Slashdot's report that the much-anticipated G1 garbage collector will not be free. The release notes for Java 6 Update 14 state:

Although G1 is available for use in this release, note that production use of G1 is only permitted where a Java support contract has been purchased. G1 is supported thru Sun's Java Platform Standard Edition for Business program.
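
For those curious, even trying G1 in this release requires an explicit opt-in via experimental flags; as I understand it, something like the following (MyApp is a placeholder):

java -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC MyApp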

The open-source community is (and always has been) a little paranoid about open source efforts by large companies. And just because you're paranoid doesn't mean they're not out to get you. This is natural: companies ultimately answer to their shareholders first and foremost, and they are typically only one CEO change away from using some combination of software patents, trade secrets, copyright, trademark enforcement, litigation and lobbying for legislation to enforce their intellectual property "rights".

Sun has always been a reluctant (and arguably fair-weather) supporter of open-source. The Java open source story really entered the limelight in the late 90s when there was a concerted effort to create an ISO standard for Java. Arguably at this time Microsoft was seeking to blunt or even fracture the Java behemoth that threatened to loosen users' lock-in to Windows. Ultimately Sun won this battle by securing the highly unusual position of official submitter for the Java ISO standard in 1997. I say "highly unusual" because it is typically industry bodies or standards organizations that submit ISO standards, not large companies.

Well, no standard was submitted and this effort was ultimately abandoned in 1999. Java open source efforts ultimately culminated with Sun's release of OpenJDK under the GPL with some minor exceptions relating to third-party code in the class library.

Does this signal Oracle's intent to create tiers of Java? This must surely be a big nail in the coffin for the "write once, run anywhere" mantra that initially propelled Java to prominence--a mantra that at best came with a lot of caveats and at worst was a myth, and that's still the case. "Write once, test everywhere" is arguably more accurate.

This move threatens Java in three important ways:

  1. It undermines confidence in Sun's/Oracle's credibility to lead the platform forward. Credibility is crucial;
  2. Customers--both current and potential--must now consider the very real risk of a fractured or tiered platform; and
  3. With OpenJDK available under the GPL, the risk of Java forking is very real. Sun's stewardship of MySQL has already resulted in that project being forked. Forking can be healthy, but for Java--a language and class library that has succeeded at least in part due to its consistency and ubiquity--forking would, in this writer's opinion, be a disaster.

Remember, not even Microsoft splits .Net into commercial and non-commercial features. Sure, Mono notwithstanding, there is a tie-in to Windows, but that's not the same thing.

One might reasonably argue "but it's only a garbage collector". To a degree, that's a fair observation. The danger is not in this one feature but rather what it signals to the Java community and what the Java community assumes it to mean, which are not necessarily the same thing. The risk of a self-fulfilling prophecy here is very real.

Last year saw similar stirrings at SpringSource following the venture capital funding they received. They announced a significant change in distribution policy:

  • Three months after the release of a major version of Spring, patch releases will no longer be made available publicly;
  • all fixes will continue to be committed to the public source repository;
  • paying customers will have access to later point releases; but
  • those releases will not be tagged publicly.

Don't be fooled: those changes are significant. If your application shipped with Spring 2.0.8 and you need to pick up a fix that has been committed to the public repository, you can no longer simply download the next point release. Instead, you will be responsible for building, packaging and tagging that particular Spring release yourself.

Back to Java: it has already ceded the desktop to the Web (as part of a much larger trend) and to .Net. Java now is all about the server, and I guarantee you that something like garbage collection is of critical importance to many server-based applications. Supporting and testing against multiple garbage collectors (eg for caching) increases the cost and the risk to the vendor. How long before particular products require such features?

To all the Java developers out there: watch this issue very carefully. Be vocal in your opposition to the fracturing of Java.

ORMs vs SQL: The JPA Story

I previously wrote about ORMs vs SQL and received a lot of reaction to it--most of it positive. Some of it was predictable ("you don't know what you're talking about (because you don't agree with me)") but one reaction from a couple of people surprised me: they took my post to mean that I was against persistence abstractions. I will now expand on those points with a specific example: the Java Persistence API ("JPA").

Some History

Hibernate was the first really successful project to try and create an object model on top of a relational one. It was--and still is--quite popular. It is clearly the most popular Java ORM. Through the better part of the last decade Spring and Hibernate were the de facto Java enterprise standard.

Other projects have come along to do much the same thing (TopLink, OpenJPA, EclipseLink and so on). Of course Sun intervened and did what they always do: tried to standardize things by creating JPA 1.0 as the persistence layer of the EJB 3.0 specification.

Standardization

To quote Joel Spolsky:

When you try to unify two opposing forces by creating a third alternative, you just end up with three opposing forces. You haven't unified anything and you haven't really fixed anything.

Sun's track record here is terrible. They made a dog's breakfast of logging (JDK 1.4), have been pushing the (still) unsuccessful JavaServer Faces ("JSF") Web application framework (sorry, but the defiant cries of "next release/year, it will take off" have a certain "boy who cried wolf" quality after 7-8 years), ignored the already-adopted and popular OSGi standard in favour of their own Java module system, have made a stillborn foray into rich client territory with JavaFX and have shown an inability to lead the community on Java 7 and the advancement of both the language and the platform.

JPA 1.0

Nevertheless, we did get the JPA 1.0 spec as the persistence layer for EJB 3.0, which boldly embraced the POJO philosophy trail blazed by Spring years earlier. JPA represents the lowest common denominator between the various ORMs that support it. It hasn't really unified anything and it hasn't really fixed anything. In fact, a good case can be made that standardization wasn't even necessary.

That being said, JPA isn't bad. It just has a lot of limitations, such that your chances of not using any provider-specific extensions on any real project are almost zero.

Anyway, Oracle donated (part of) their TopLink product, which became TopLink Essentials, the reference implementation of JPA 1.0. Just as Sun ignored Log4J in the logging debate (yes, yes, I know the JDK logging can wrap Log4j), one has to wonder why they bypassed Hibernate, but I guess we should expect that by now.

Oracle donated TopLink to the Eclipse Foundation in 2006 and that became EclipseLink, a product I've used a lot and respect a lot in this space. It has some nice features that I've found no equivalent for in Hibernate (but I digress). EclipseLink 2.0 will be the reference implementation of the imminent JPA 2.0 specification as part of EJB 3.1 in JEE 6 (did you get all that?).

Complexity

While all these libraries do basically the same thing, they are fundamentally different in their implementation, and that's the first problem. What happens when you try and use them outside of a J(2)EE container? Hibernate answers with runtime proxies and bytecode generation; EclipseLink relies on static or load-time weaving of your entity classes; and each comes with its own configuration burden.

My point here is that there is an awful lot of complexity here just for one feature: lazy fetching of associated entities.
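
As a minimal sketch of what that one feature looks like at the API level (the Customer and Account entities are hypothetical):

import java.util.List;
import javax.persistence.*;

@Entity
public class Customer {
  @Id @GeneratedValue
  private Long id;

  // LAZY is only a hint: honouring it outside a container typically
  // requires runtime proxies (Hibernate) or class weaving (EclipseLink)
  @OneToMany(mappedBy = "customer", fetch = FetchType.LAZY)
  private List<Account> accounts;
}

One annotation attribute for the developer; proxies or woven bytecode behind the scenes for the provider.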

Differences

This is a list of some of the differences and extensions for some of the JPA providers. This list is by no means exhaustive but it illustrates my point:

  • EclipseLink has the @PrivateOwned annotation, for automatically deleting child records that are removed from the collection. Programmers often mistakenly think that's what CascadeType.REMOVE does. Not so (see the sketch after this list);
  • EclipseLink has the BATCH query hint, which is incredibly useful for mass loading of a large number of entities with discriminated type. This is something for which I have found no Hibernate equivalent. I'll happily be proven wrong on this one;
  • Performance of different JPA providers can be hugely different (although I think that test doesn't do EclipseLink justice); and
  • The properties and setup are different.
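
To make the first two concrete, here is a rough sketch of those EclipseLink extensions in use (hypothetical entities; the hint name is EclipseLink's "eclipselink.batch"):

import java.util.List;
import javax.persistence.*;
import org.eclipse.persistence.annotations.PrivateOwned;

@Entity
public class Customer {
  @Id @GeneratedValue
  private Long id;

  // EclipseLink extension: an Account removed from this collection is
  // deleted from the database, which CascadeType.REMOVE alone won't do
  @OneToMany(mappedBy = "customer", cascade = CascadeType.ALL)
  @PrivateOwned
  private List<Account> accounts;

  // EclipseLink BATCH hint: fetch the accounts for the whole result set
  // in one extra query rather than one query per Customer
  public static List<?> loadAll(EntityManager em) {
    Query q = em.createQuery("SELECT c FROM Customer c");
    q.setHint("eclipselink.batch", "c.accounts");
    return q.getResultList();
  }
}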

Whenever there are standards there will be differences between providers. But when the common functionality is insufficient, to the point that (typically extensive) use of extensions is a given, an arguably unnecessary "standard" becomes pointless or even counterproductive.

Problems

JPA is certainly not without problems. What comes to mind is:

  • Native queries are really awkward to use, returning Object arrays when you select multiple columns (see the sketch after this list). This is, in part, Java's fault compared to, say, C#, which neatly gets around this with LINQ and anonymous types (with "var" providing the compile-time type inference to make that palatable);
  • JPA can be a real black box for generating SQL;
  • Composite keys are really awkward to use. So much so that composite primary keys are often described as "legacy" in JPA texts, blogs and articles;
  • Entities, despite the claims of being POJOs, really aren't. They're typically unsuitable for transmission over a network, conversion to JSON and so on, typically requiring a translation layer;
  • No standard support for filtering collections. For example, a Customer entity may have several child Accounts, only 1-2 of which are active (marked with a flag). JPA doesn't really support just joining across the "active" children in this scenario; and
  • JPA QL is another language you have to learn with little to no tooling support. It's not as capable as SQL is either (hence the need for native queries).
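
To illustrate the first point, a multi-column native query hands back untyped Object arrays that you unpack and cast positionally (a sketch; the table and column names are hypothetical):

import javax.persistence.*;

public class ReportDao {
  public void printAccountCounts(EntityManager em) {
    Query q = em.createNativeQuery(
      "SELECT c.id, c.name, COUNT(a.id) FROM customer c " +
      "JOIN account a ON a.customer_id = c.id GROUP BY c.id, c.name");
    for (Object row : q.getResultList()) {
      Object[] cols = (Object[]) row;            // one row, untyped
      long id = ((Number) cols[0]).longValue();  // cast by position
      String name = (String) cols[1];
      long accounts = ((Number) cols[2]).longValue();
      System.out.println(id + " " + name + ": " + accounts);
    }
  }
}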

Again, this list is illustrative not exhaustive.

All of these things constitute part of the complexity cost for the "completeness" of the abstraction I talked about in my previous post.

Leakiness

Returning once more to the concept of leaky abstractions: a high price has been paid in complexity (eg dynamic weaving), there are a lot of provider differences from the "standard", and the abstraction is still leaky. The best example of this is:

How many of you have spent half a day trying to figure out which arcane combination of XML, properties, VM parameters and annotations will produce performant SQL?

Conclusion

My goal here isn't to deride or diminish JPA or any particular provider. Like I said, I like EclipseLink (in the right applications). Even so and even after using it for a year, I'm still scratching my head trying to figure out how some of it works (eg the session management).

This quest for "simplicity" (being an object model in your persistence layer) is so incredibly complex both in use and in implementation that I believe it has reached the point of (often) being counterproductive.

This is exacerbated by a certain kind of programmer who believes that the point of an abstraction is to avoid learning or understanding the underlying technology, a philosophy I vehemently oppose. If you're doing JPA, you still need to know databases and SQL. If you're using a Web application framework, you still need to know the servlets API and how HTTP works at least at a high level.

So what's the alternative?

This leads me into something I'll discuss at length next week: Ibatis. I firmly believe that Ibatis is the premier Java ORM framework. It is capable of doing 90-95% of what JPA can do with significantly lower complexity and a significantly shallower learning curve. But more on that next week.


ORM or SQL: Are We There Yet?

A decade ago I was first introduced to servlets. It was a revelation. Prior to that I (like most people) had been doing CGI scripts (in Perl and even C and C++). Suddenly we had something that persisted between requests. Combined with JDBC you had a pretty powerful platform.

In 2001 or so began the madness that was the Java world's love affair with EJBs. I came into it at the EJB 1.1 stage with EJB 2.0 just on the horizon. A consultant at the time decided it was a good idea to write a logging service as an entity bean. It took roughly half a second to write a log message!

This began my questioning of, and ultimate departure from, EJB. Going through 2002, cracks were surfacing. It became widely accepted that EJBs weren't suitable for high-throughput applications. The pro-EJB crowd argued that EJBs weren't suitable for all applications but that shouldn't mean you dismiss the technology.

Experience has taught me that when someone wheels out this argument of selective observation, it's a huge red flag.

Apart from stateless session beans, which are a fairly cheap and effective way of getting distributed transactions in a clustered J2EE environment, EJB (pre-3.0) was basically a bad idea.

By this stage J2EE had started to fracture. Soon would come Spring, which would change the Java server landscape forever. The other big change was Hibernate.

Hibernate became the poster-child for post-EJB OO fanatics. I was late to this particular party. I'd gone back to doing plain SQL and was happy. Less than two years ago I was forced to learn JPA and I gave it a fair shake of the stick, I really did.

The relational-object divide has been a divisive issue for many years. Jeff Atwood went as far as saying Object-Relational Mapping is the Vietnam of Computer Science.

I am a steadfast believer that abstractions are leaky. And an object model on top of a relational model is an abstraction. To jam this into the Java/J2EE world, apparently mechanisms like load-time weaving were required.

Now I believe that those behind such changes were (and are) well-intentioned but, as the quote goes, the road to hell is paved with good intentions. I see the problem like this: imagine a Cartesian graph with one axis representing complexity and the other representing the completeness of the abstraction. The more "complete" it is, the less leaky it is. I picture such a graph producing a hyperbola. JDBC is simple because it's not much of an abstraction. Hibernate and other JPA providers are incredibly complex because they are attempting to be complete.
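
To put that picture in (purely illustrative) terms: if a is the completeness of the abstraction on a scale from 0 to 1, the claim is that complexity behaves something like k / (1 - a): cheap while a is small, diverging as a approaches 1.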

There are two corollaries you can draw from this analogy that I think fit:

  1. To achieve a perfect abstraction the solution would be infinitely complex (ie impossible); and
  2. There is a "sweet spot" in the middle where a little abstraction has a large reward but, beyond a certain point, there is a law of diminishing returns.

I also believe that every developer should be educated in relational algebra, comfortable with databases and proficient in SQL. Just as any Java Web developer should understand the servlets API before they can truly appreciate and properly leverage a higher-order MVC framework like Spring MVC or Struts 2. Ultimately, you still need to know something about how HTTP works. The same applies to SQL.

This is why I believe the effort to create the "perfect" object model for relational databases is futile: ten-plus years later we're still not there. And you know what? I don't think we'll be there in another ten years either. If anything, the issue will become a non-issue as the traditional relational database is replaced by "slacker" databases, persistent caches or whatever comes after that.

Until then, perhaps we should stop trying to fit a square peg into a round hole. SQL just works, it's not that hard and your application will be less complex as a result. That can only be good.

Next: The JPA Story

Supercharging CSS, Part 3: CSS Variables

Previous: Themes

It is often the case when skinning or theming a new version of a site that most of the CSS is the same and all we'll do is change a few colours and little else. Each site will tend to have a palette of ten or fewer colours (usually five or fewer) that are subject to this kind of change. Yet there will be numerous references to the same values, and they are tedious and error-prone to change.

This has long been an issue for Web designers and developers, so much so that there is now a proposed standard for CSS variables. The problem with any new CSS feature is, of course, browser support. CSS 2 was introduced in 1998 (although CSS 2.1 didn't become a candidate recommendation until almost nine years later) and we still can't rely on total support for it.

The proposed standard would use syntax such as this:

@variables {
  CorporateLogoBGColor: #fe8d12;
}

div.logoContainer {
  background-color: var(CorporateLogoBGColor);
}

Although this issue is a decade old, agreement is far from universal. Some argue CSS variables are harmful while others argue CSS variables are unnecessary.

My personal experience has been that stylesheets often become large and unwieldy. Retheming a site becomes a daunting prospect. I see CSS variables as nothing more than the declaration of semantic intent or context, no different to using constants in programming languages. In other words, they have the potential to make stylesheets far more readable and maintainable. We can't wait for IE9 and FF4 to be the browser baseline, however, so we have to do this ourselves. PHP to the rescue.

The solution I'm proposing here isn't new. Dynamically generated stylesheets with variable substitution are not a new idea. I'm simply extending the script developed thus far to include this capability and explaining what it is we're trying to achieve and why.

Rather than copy the proposed syntax for CSS Variables, which would be non-trivial to parse and process reliably, I am going to use a far more "PHP-ish" solution and use variables beginning with a dollar sign, as this is a rarely occurring character in stylesheets.
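
For example (all names illustrative), a site.properties file might contain:

// site.properties -- this theme's palette
HeaderBGColor = #1c5f9a
BodyTextColor = #333333

and the stylesheet, before substitution, would read:

#header { background-color: $HeaderBGColor; }
body { color: $BodyTextColor; }

The client only ever sees the substituted values, so no special browser support is required. The script below implements this.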

<?php
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/css/');
define('CACHE_DIR', $_SERVER['DOCUMENT_ROOT'] . '/cache/');
define('PROPERTIES_EXTENSION', '.properties');

$bundles = array(
  'site' => array(
    'superfish.css',
    'tooltips.css',
    'site.css',
  ),
);

$site = $_GET['site'];
if (!isset($bundles[$site])) {
  error_log("css.php: Unknown bundle '$site' requested");
  exit;
}

$mtime = $_GET['mtime'];
$cache_file = CACHE_DIR . $site . '.css';
$cache_mtime = @filemtime($cache_file);

// we need to rebuild if the passed-in mtime is newer than the cache file mtime
if ($mtime > $cache_mtime) {
  $css = '';
  foreach ($bundles[$site] as $file) {
    $contents = @file_get_contents(SCRIPT_DIR . $file);
    if ($contents === false) {
      error_log("css.php: Error reading file '$file'");
    } else {
      $css .= $contents;
    }
  }
  $css = replace_variables($site, $css);
  file_put_contents($cache_file, $css);
} else {
  $css = file_get_contents($cache_file);
}

header('Content-Type: text/css');
header('Expires: ' . gmdate('D, d M Y H:i:s', time()+365*24*3600) . ' GMT');
header('ETag: "' . md5($css) . '"');

if (is_buggy_IE()) {
  ob_start();
} else {
  ob_start('ob_gzhandler');
}

echo $css;

function is_buggy_IE() {
  $ret = false;
  $agent = $_SERVER['HTTP_USER_AGENT'];
  if (strpos($agent, 'Mozilla/4.0 (compatible; MSIE ') === 0 && strpos($agent, 'Opera') === false) {
    $version = floatval(substr($agent, 30));
    if ($version < 6) {
      $ret = true;
    } else if ($version == 6 && strpos($agent, 'SV1') === false) {
      $ret = true;
    }
  }
  return $ret;
}

function replace_variables($site, $css) {
    global $css_variables; // this needs to be accessible
    $file = SCRIPT_DIR . $site . PROPERTIES_EXTENSION;
    $contents = file_get_contents($file);
    $lines = explode("\n", $contents);
    $css_variables = array();
    foreach ($lines as $line) {
        $line = preg_replace('!//(.*)$!', '', $line); // allow for comments
        $line = trim($line);
        if (!$line) {
            continue;
        }
        list($k, $v) = explode('=', $line, 2);
        $k = trim($k);
        $v = trim($v);
        if (isset($css_variables[$k])) {
            die("Variable '$k' already set");
        }
        if (!preg_match('!^\w+$!', $k)) {
            die("Illegal variable '$k'. Must be letters, digits or underscore only.");
        }
        $css_variables[$k] = $v;
    }
    return preg_replace_callback('!\$([\w_]+)!', 'replace_variable', $css);
}

function replace_variable($matches) {
    global $css_variables;
    $var = $matches[1];
    if (!isset($css_variables[$var])) {
        // if we're strict, we could die here
        //die("Unknown variable '$var' encountered in CSS");
        // more loosely we could just return the expression unchanged
        return '$' . $var;
    }
    return $css_variables[$var];
}
?>

This script will load some values from .properties and substitute $variables from the CSS for those values. Once again this behavior can be extended to get values based on user preferences, from the database or whatever you want to implement. The end result is browser-compatible yet much more powerful than "plain" CSS.
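
As one sketch of the database-driven variant (the table and column names are hypothetical), the variable map could be loaded with PDO instead of from .properties:

<?php
function load_css_variables($site) {
  // Hypothetical schema: css_variables(site, name, value)
  $db = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
  $stmt = $db->prepare('SELECT name, value FROM css_variables WHERE site = ?');
  $stmt->execute(array($site));
  $vars = array();
  foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    $vars[$row['name']] = $row['value'];
  }
  return $vars;
}
?>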

What about performance? The use of regex replacement is quite cheap in this situation--much cheaper than the minification in the Javascript example. And because the end result is cached so aggressively, any such cost is minimized by virtue of the client simply not requesting it that often.

Once more, feel free to use the code in any way you wish. Drop me a line or leave a comment if this was helpful to you or you find an issue or simply have a suggestion.

Supercharging CSS, Part 2: Themes

Previous: GZipped and Cached CSS

It is a reasonably common requirement--or at least it has been in my professional experience--to run several sites off the same codebase that are very similar and indeed have virtually identical functionality. The usual reason for doing this is that different clients either want to integrate the functionality you provide into their own service offerings or a number of companies become "virtual resellers". There are two general approaches to doing this:

  1. Have separate installations that need to be separately maintained; or
  2. Use the same installation of code, where the code is clever enough to act differently for whichever site it is serving, based on the request.

(1) leads to a lot of code repetition, and maintenance generally becomes harder as you may need to make and/or deploy a bunch of identical or near-identical changes. It is tempting to dismiss this approach out of hand, but it does have its place: every site-specific condition complicates your code, and changes meant for one site can potentially regress other virtual sites, so separate installations can actually mean cleaner code. The rule of thumb is that if sites are more different than similar then this approach might be appropriate. That is obviously a subjective test.

The other more common approach is for the code to behave differently depending on what site the request is serving. This approach is particularly appropriate when sites are more similar than different. The obvious discriminator is the fully-qualified hostname of the site. I will usually end up with some common code that is executed on every PHP page that might go something like this:

<?php
if ($_SERVER['SERVER_NAME'] == 'www.example.com') {
  define('SITE_NAME', 'example');
} else if ($_SERVER['SERVER_NAME'] == 'www.myhost.com') {
  define('SITE_NAME', 'myhost');
} else {
  die("Unrecognized host name $_SERVER[SERVER_NAME]");
}
?>

Of course this is a simple example. The configuration might be more dynamic, possibly database-driven, and there are quite likely to be an awful lot more per-site values than these (eg contact information, homepage content, header and footer files, copyright notices, terms and conditions, menu items and structure and so on).

So instead of identifying the "bundle" of CSS files by a supplied parameter, we simply change it to key off the server name.

$bundles = array(
  'www.example.com' => array(
    'reset.css',
    'superfish.css',
    'example.css',
  ),
  'www.myhost.com' => array(
    'reset.css',
    'superfish.css',
    'myhost.css',
  ),
);

$site = $_SERVER['SERVER_NAME'];
if (!isset($bundles[$site])) {
  error_log("css.php: Unknown bundle '$site' requested");
  exit;
}

This logic can be as simple or complex as desired. For example, the "site" GET parameter (via rewrite) can be used in conjunction with the server name.
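
For instance (a sketch), the bundle key could combine both:

<?php
// Key bundles by host plus page area, eg 'www.example.com:admin'
$site = $_SERVER['SERVER_NAME'] . ':' . $_GET['site'];
if (!isset($bundles[$site])) {
  error_log("css.php: Unknown bundle '$site' requested");
  exit;
}
?>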

Alternatively, you could even implement user themes this way. The user could select the theme they're interested in. The link_css() function would return the correct URL to ask for that theme and css.php could then return the right bundle of CSS for that theme.

The possibilities are really endless.

Next: CSS Variables

Supercharging CSS, Part 1: GZipped and Cached CSS

Firstly, the rewrite rules:

RewriteEngine On
RewriteBase /
RewriteRule ^style/(\w+)\.(\d+)\.css$ /css.php?site=$1&mtime=$2 [L]

Second, the external CSS reference:

<?php
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/css/');
define('SCRIPT_PATH', '/style/');

$bundles = array(
  'site' => array(
    'reset.css',
    'superfish.css',
    'site.css',
  ),
);

function link_css($site) {
  global $bundles;
  if (!isset($bundles[$site])) {
    die("css.php: Unknown bundle '$site' requested");
  }
  $mtime = 0;
  foreach ($bundles[$site] as $file) {
    $file_mtime = filemtime(SCRIPT_DIR . $file);
    if ($file_mtime !== false && $file_mtime > $mtime) {
      $mtime = $file_mtime;
    }
  }
  return SCRIPT_PATH . $site . '.' . $mtime . '.css';
}
?>

Third, the reference in the Web page:

<link href="<?php echo link_css('site') ?>" rel="stylesheet" type="text/css">

And lastly, the script that generates the CSS:

<?php
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/css/');
define('CACHE_DIR', $_SERVER['DOCUMENT_ROOT'] . '/cache/');

$bundles = array(
  'site' => array(
    'reset.css',
    'superfish.css',
    'site.css',
  ),
);

$site = $_GET['site'];
if (!isset($bundles[$site])) {
  error_log("css.php: Unknown bundle '$site' requested");
  exit;
}

$mtime = $_GET['mtime'];
$cache_file = CACHE_DIR . $site . '.css';
$cache_mtime = @filemtime($cache_file);

// we need to rebuild if the passed-in mtime is newer than the cache file mtime
if ($mtime > $cache_mtime) {
  $css = '';
  foreach ($bundles[$site] as $file) {
    $contents = @file_get_contents(SCRIPT_DIR . $file);
    if ($contents === false) {
      error_log("css.php: Error reading file '$file'");
    } else {
      $css .= $contents;
    }
  }
  file_put_contents($cache_file, $css);
} else {
  $css = file_get_contents($cache_file);
}

header('Content-Type: text/css');
header('Expires: ' . gmdate('D, d M Y H:i:s', time()+365*24*3600) . ' GMT');
header('ETag: "' . md5($css) . '"');

if (is_buggy_IE()) {
  ob_start();
} else {
  ob_start('ob_gzhandler');
}

echo $css;

function is_buggy_IE() {
  $ret = false;
  $agent = $_SERVER['HTTP_USER_AGENT'];
  if (strpos($agent, 'Mozilla/4.0 (compatible; MSIE ') === 0 && strpos($agent, 'Opera') === false) {
    $version = floatval(substr($agent, 30));
    if ($version < 6) {
      $ret = true;
    } else if ($version == 6 && strpos($agent, 'SV1') === false) {
      $ret = true;
    }
  }
  return $ret;
}
?>

What we have there can already make a substantial difference to site usability and response times.

But why stop there?

Next: Themes

Supercharging CSS in PHP

This is a follow-on to Supercharging Javascript in PHP. The issues with optimizing and caching CSS are almost identical to those of Javascript (save minification).

This guide will use those same techniques for CSS and extend the power of CSS to more easily allow theming and skinning sites.

Because the issues like reducing external HTTP requests, gzipping, caching and the Internet Explorer problem with gzip are all identical, I won't repeat myself on the "why" of those points. For that detail, please read the preceding article.

Battle for the Rich Client

Let me just gush for a moment and say that Joel Spolsky is a God. I have long extolled the virtues of every programmer reading Joel on Software religiously. It is the #1 programming blog by a country mile because it is the best bar none.

Just today I came across How Microsoft Lost the API War. In it Joel questions the sanity of breaking backward compatibility in the then-upcoming Windows Vista (codenamed Longhorn at the time). He also states that the desktop just doesn't matter anymore because it's all about the Web.

This is pretty much accepted wisdom now but what makes this posting amazing is that it was written in 2004.

Microsoft is fighting a desperate rearguard action on two fronts. Firstly, it is struggling to stay relevant as a Web content delivery platform, a fight in which ASP.NET is doing a fairly reasonable job. Love it or hate it, ASP.NET is popular, but with PHP, Java, Ruby and Django (pretty much in that order) all vying for that particular crown, the battle is far from won.

The second--arguably more interesting--front is for the rich client. This space is currently dominated by Adobe with Flash/Flex. Sun in the last year made a stillborn play for this market with JavaFX. Microsoft has a far more plausible solution with Silverlight.

Silverlight on paper has a lot going for it. It leverages the .Net platform, you can write Silverlight applications in any .Net language and (more importantly) you can use the same code in the client and the server (within the constraints of the Silverlight subset of APIs). This last point is a compelling advantage over Flash/Flex where you have one (arguably rudimentary) language for the client and something else (eg Java) on the server.

I have said--and maintain--that "Windows is by far .Net's biggest Achilles heel". .Net is designed to sell Windows licenses so it's no surprise that it is Windows-centric. So desperate is Microsoft to succeed here that they took the unprecedented move of supporting the Moonlight project and an Eclipse plugin for development.

The desperation is more apparent with the Silverlight 3 announcement and how it cannibalizes WPF.

Two important events have happened recently that have knocked the wind out of Microsoft:

  1. NBC dumps Microsoft Silverlight after Olympics (note: not everyone agrees with this characterization); and
  2. New York Times Dropping WPF/Silverlight for Adobe AIR.

On the Times:

Unfortunately the Silverlight version has been plagued with problems, both political and technical. The biggest hurdle was the lack of cross-platform support. Though based on WPF or Windows Presentation Foundation, Silverlight only has a subset of WPF's capabilities. This makes writing code that works on both difficult and most developers seem to end up maintaining two separate code bases. Silverlight 2.0 is designed to run within a browser, a limitation not found in WPF. Apple users, who tend to be sensitive to such issues, rightfully complained about not having all the same features as Windows users.

Whether true or not, justified or not or overblown or not, this is a big deal and alarm bells should be ringing in Redmond. The problem, as I see it, is that Microsoft lacks singularity of purpose.

The iPod became the behemoth of portable music in part because Apple decided to make the best digital music player they could (although it was originally--briefly--Mac only). Sony's Playstation 3 shipped late and is struggling against the Xbox 360 and Nintendo Wii in part because Sony used it as a pawn against Toshiba in the next-gen optical format war (and that may yet turn out to be winning the battle but losing the war). Internet Explorer 4 decimated Netscape because Microsoft had decided to make the best browser they could that ran as fast as possible while Netscape was trying to build a communications platform.

The common theme here is singularity of purpose. Those who have it tend to succeed against those who don't. Every competing goal is a compromise.

Microsoft may yet succeed with Silverlight but the price of victory might be Windows. Is Redmond willing to pay that price?

CAPTCHA: Too Much of a Good Thing?

I'm a big fan of StackOverflow, to which I am a frequent contributor. If you are a programmer who is tired of trawling forums filled with old answers or using the "evil hyphen site" (as Joel puts it) then you should seriously go check it out. Joel Spolsky (of Joel on Software fame) recently gave a Google Tech Talk about StackOverflow that is well worth watching.

StackOverflow does a pretty good job of keeping spammers and abuse away while remaining open and accessible, although in February they were forced to limit question and answer rate for new users in response to one particular incident.

Joel correctly makes the point that behaviour is, to some degree, a function of environment. And herein lies my particular beef. Jeff Atwood (of Coding Horror fame), when discussing the StackOverflow Question Lifecycle, said:

...take a long, hard look at how bad the sofaq tag has become...I spent a few hours cleaning it up tonight and I barely made a dent in the sprawling mess it has become.

Just today I decided to do a little cleanup of my own and remove the "gae" tag. It was used 15 times. Out of those 15 questions, 14 were also tagged "google-app-engine", which is used over 200 times already. So basically it is superfluous. Anyway, in the course of retagging a mere 15 questions I was presented with a CAPTCHA SEVEN times!

Now I'm all for keeping spam and abuse at bay, but this has reached the point of discouraging positive behaviour, ie cleaning up the site. So it should come as no shock that messes like the sofaq tag have come about and will no doubt persist.

Interestingly, I recently came across an article titled The Death of CAPTCHA. Personally I look forward to the day that we have some better solution than challenging legitimate users with sometimes almost indecipherable sigils (have you seen Google's own CAPTCHA?) that really turn a lot of people off.

At some point you just have to look at the content or what's being done and not just use brute-force rate limiters as a band-aid solution.

Supercharging Javascript, Part 6: The Internet Explorer Problem

Previous: Caching on the Client

Of course, no discourse on Web development would be complete without the inevitable "What about IE6?" question and this one is no exception.

Unpatched versions of Internet Explorer 6 do not correctly decompress data compressed with GZip. Thankfully this is an increasingly uncommon problem because IE6 is dying and unpatched versions are increasingly rare.

But it is still best practice to detect buggy versions and disable GZip compression (ignoring the GZip accept encoding header) on those versions. Unfortunately ob_gzhandler() does not do this automatically.

<?php
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/script/');
define('CACHE_DIR', $_SERVER['DOCUMENT_ROOT'] . '/cache/');

$bundles = array(
  'site' => array(
    'jQuery-1.3.2.js',
    'jquery.bgiframe.js',
    'jquery.dimensions.js',
    'supersubs.js',
    'superfish.js',
    'site.js',
  ),
);

$site = $_GET['site'];
if (!isset($bundles[$site])) {
  error_log("javascript.php: Unknown bundle '$site' requested");
  exit;
}

$mtime = $_GET['mtime'];
$cache_file = CACHE_DIR . $site . '.js';
$cache_mtime = @filemtime($cache_file);

// we need to rebuild if the passed-in mtime is newer than the cache file mtime
if ($mtime > $cache_mtime) {
  require 'jsmin-1.1.1.php';
  $scripts = '';
  foreach ($bundles[$site] as $file) {
    $contents = @file_get_contents(SCRIPT_DIR . $file);
    if ($contents === false) {
      error_log("javascript.php: Error reading file '$file'");
    } else {
      $scripts .= $contents;
    }
  }
  $min_content = JSMin::minify($scripts);
  file_put_contents($cache_file, $min_content);
} else {
  $min_content = file_get_contents($cache_file);
}

header('Content-Type: text/javascript');
header('Expires: ' . gmdate('D, d M Y H:i:s', time()+365*24*3600) . ' GMT');
header('ETag: "' . md5($min_content) . '"');

if (is_buggy_IE()) {
  ob_start();
} else {
  ob_start('ob_gzhandler');
}

echo $min_content;

function is_buggy_IE() {
  $ret = false;
  $agent = $_SERVER['HTTP_USER_AGENT'];
  if (strpos($agent, 'Mozilla/4.0 (compatible; MSIE ') === 0 && strpos($agent, 'Opera') === false) {
    $version = floatval(substr($agent, 30));
    if ($version < 6) {
      $ret = true;
    } else if ($version == 6 && strpos($agent, 'SV1') === false) {
      $ret = true;
    }
  }
  return $ret;
}
?>

So that's it. The above is a fully-working, fully-tested solution for the efficient delivery of Javascript that can make a profound difference to page loading times, and it doesn't sacrifice one of the great things about PHP: the ability to save a file, reload a Web page and see whether what you've done has worked. There is no separate build step, yet none of the caching functionality is compromised.

Feel free to use the code in any way you wish. Drop me a line or leave a comment if this was helpful to you or you find an issue or simply have a suggestion.

Supercharging Javascript, Part 5: Caching on the Client

Previous: Caching on the Server

Caching on the client is already a well-established principle of high-performance Web sites. In this section I am going to incorporate it into the script.

There are alternatives to doing it this way. Some use Apache modules like mod_expires or mod_headers. There is nothing wrong with that approach but, for compacted Javascript like this script produces, I think it's better done from the script itself, so that's what we'll do.

For those unfamiliar with the concept, the idea is to put a far-future Expires header on our Javascript. That way the client doesn't download the Javascript, or even check whether there is a newer version, on every page load.

This raises the question: what happens when you change the Javascript and you want the client to get the latest copy? Easy. You change the filename every time you want the client to download it. So firstly we need to modify our rewrite rule.

RewriteRule ^javascript/(\w+)\.(\d+)\.js$ /javascript.php?site=$1&mtime=$2 [L]

There are many techniques for doing this. A common alternative is adding a dummy query string to the end of the URL (eg /javascript/site.js?v=1234567890). Now we just need to generate the right filename, so we will need a dynamically generated script tag. Instead of:

<script type="text/javascript" src="/javascript/site.js"></script>

we need

<?php
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/script/');
define('SCRIPT_PATH', '/javascript/');

$bundles = array(
  'site' => array(
    'jQuery-1.3.2.js',
    'jquery.bgiframe.js',
    'jquery.dimensions.js',
    'supersubs.js',
    'superfish.js',
    'site.js',
  ),
);

function link_javascript($site) {
  global $bundles;
  if (!isset($bundles[$site])) {
    die("javascript.php: Unknown bundle '$site' requested");
  }
  $mtime = 0;
  foreach ($bundles[$site] as $file) {
    $file_mtime = filemtime(SCRIPT_DIR . $file);
    if ($file_mtime !== false && $file_mtime > $mtime) {
      $mtime = $file_mtime;
    }
  }
  return SCRIPT_PATH . $site . '.' . $mtime . '.js';
}
?>

and in our page:

<script type="text/javascript" src="<?php echo link_javascript('site') ?>"></script>

And finally the script that drives it all:

<?php
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/script/');
define('CACHE_DIR', $_SERVER['DOCUMENT_ROOT'] . '/cache/');

$bundles = array(
  'site' => array(
    'jQuery-1.3.2.js',
    'jquery.bgiframe.js',
    'jquery.dimensions.js',
    'supersubs.js',
    'superfish.js',
    'site.js',
  ),
);

$site = $_GET['site'];
if (!isset($bundles[$site])) {
  error_log("javascript.php: Unknown bundle '$site' requested");
  exit;
}

$mtime = $_GET['mtime'];
$cache_file = CACHE_DIR . $site . '.js';
$cache_mtime = @filemtime($cache_file);

// we need to rebuild if the passed-in mtime is newer than the cache file mtime
if ($mtime > $cache_mtime) {
  require 'jsmin-1.1.1.php';
  $scripts = '';
  foreach ($bundles[$site] as $file) {
    $contents = @file_get_contents(SCRIPT_DIR . $file);
    if ($contents === false) {
      error_log("javascript.php: Error reading file '$file'");
    } else {
      $scripts .= $contents;
    }
  }
  $min_content = JSMin::minify($scripts);
  file_put_contents($cache_file, $min_content);
} else {
  $min_content = file_get_contents($cache_file);
}

header('Content-Type: text/javascript');
header('Expires: ' . gmdate('D, d M Y H:i:s', time()+365*24*3600) . ' GMT');
header('ETag: "' . md5($min_content) . '"');

ob_start('ob_gzhandler');

echo $min_content;
?>

The last thing we added was an ETag HTTP header. Because of everything else going on, this is essentially superfluous but, if nothing else, it will make YSlow happier. The Expires header is set for one year in the future. This is strictly arbitrary and you can put anything you want there.

Of course, one little problem remains...

Next: The Internet Explorer Problem

Supercharging Javascript, Part 4: Caching on the Server

Previous: Minify Everything

So far we've developed a reasonably decent dynamic Javascript packing script. The problem is that the packing is done on every request, so we should store the result. To do this our PHP application will need to be able to write the result to the filesystem. Typically, for security reasons, PHP scripts have no write permissions except to the temp directory. There are several issues that must be addressed when implementing this:

Caching in the temporary directory is inherently insecure. Other users can view and possibly modify your cache, presenting a big potential security hole. This is particularly a problem on shared hosting, where other sites have the same access to your files that you do--already a well-known weakness of PHP session security on shared hosting.

If you use shared hosting--or any environment where security from other users is a potential issue--I would advise you to err on the side of caution and not use this part of the script.

My preferred method is to use filemtime(). Basically you compare the last modified time of the most recently modified Javascript file to the last modified time of the cache file. If the former is newer, the cache needs to be rebuilt.

<?php
// These no longer even need to be under the document root
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/script/');
define('CACHE_DIR', $_SERVER['DOCUMENT_ROOT'] . '/cache/');

$bundles = array(
  'site' => array(
    'jQuery-1.3.2.js',
    'jquery.bgiframe.js',
    'jquery.dimensions.js',
    'supersubs.js',
    'superfish.js',
    'site.js',
  ),
);

$site = $_GET['site'];
if (!isset($bundles[$site])) {
  error_log("javascript.php: Unknown bundle '$site' requested");
  exit;
}

// determine if we need to rebuild the cache
$cache_file = CACHE_DIR . $site . '.js';
// Error is suppressed here because otherwise it'll send an error to the user and this is
// a valid case before the cache has initially been generated
$cache_mtime = @filemtime($cache_file);
$build_cache = false;
if ($cache_mtime === false) {
  $build_cache = true;
} else {
  $mtime = 0;
  foreach ($bundles[$site] as $file) {
    $file_mtime = filemtime(SCRIPT_DIR . $file);
    if ($file_mtime !== false && $file_mtime > $mtime) {
      $mtime = $file_mtime;
    }
  }
  if ($mtime > $cache_mtime) {
    $build_cache = true;
  }
}

// build the cache if required
header('Content-Type: text/javascript');
ob_start('ob_gzhandler');
if ($build_cache) {
  require 'jsmin-1.1.1.php';
  $scripts = '';
  foreach ($bundles[$site] as $file) {
    $contents = @file_get_contents(SCRIPT_DIR . $file);
    if ($contents === false) {
      error_log("javascript.php: Error reading file '$file'");
    } else {
      $scripts .= $contents;
    }
  }
  $min_content = JSMin::minify($scripts);
  file_put_contents($cache_file, $min_content);
  echo $min_content;
} else {
  readfile($cache_file);
}
?>

Now we're getting somewhere. But we can do even better than this.

Next: Caching on the Client

Supercharging Javascript, Part 3: Minify Everything

Previous: GZip Everything

Minification is not a new idea. I've found it tends to get used in Java and .Net apps more just because they already have a build process--something that doesn't normally occur with PHP applications. The excellent and well-regarded YUI Compressor is a Java program that is designed to be used as a command line tool, typically in a build process.

There is good reason for this attitude: good minification can be a relatively expensive operation. It's not something you'd necessarily want to do on every request.

Some might question the need for minification at all if the output is already gzipped. While it is true that gzipping will do some of the job of reducing the payload size, good minification will go beyond that to renaming variables and possibly even rewriting code sections that could be shortened.
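
As an illustrative before/after (roughly what a renaming minifier such as YUI Compressor might produce; JSMin itself only strips whitespace and comments):

// before
function calculateTotal(itemPrices) {
  var runningTotal = 0;
  for (var index = 0; index < itemPrices.length; index++) {
    runningTotal += itemPrices[index];
  }
  return runningTotal;
}

// after: local names shortened, whitespace gone; the public function
// name is preserved
function calculateTotal(a){var b=0;for(var c=0;c<a.length;c++){b+=a[c]}return b}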

Popular libraries like jQuery and plugins often come in a pre-minified form. It's fine to use these but you'll still need to minify the scripts you write. I prefer to use the unpacked/unminified versions for debugging purposes. It's easy to change a script such as this one to minify or not depending on the environment, the user or some other setting.

My tool of choice for this job is jsmin.php, a PHP port of Douglas Crockford's JSMin, released under the MIT license.

Our script then becomes:

<?php
require 'jsmin-1.1.1.php';

// These no longer even need to be under the document root
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/script/');

$bundles = array(
  'site' => array(
    'jQuery-1.3.2.js',
    'jquery.bgiframe.js',
    'jquery.dimensions.js',
    'supersubs.js',
    'superfish.js',
    'site.js',
  ),
);

ob_start('ob_gzhandler');

$site = $_GET['site'];
if (!isset($bundles[$site])) {
  error_log("javascript.php: Unknown bundle '$site' requested");
  exit;
}
header('Content-Type: text/javascript');
$scripts = '';
foreach ($bundles[$site] as $file) {
  $contents = @file_get_contents(SCRIPT_DIR . $file);
  if ($contents === false) {
    error_log("javascript.php: Error reading file '$file'");
  } else {
    $scripts .= $contents;
  }
}
echo JSMin::minify($scripts);
?>

This is, of course, concatenating and minifying on every request. We should remedy that next.

Next: Caching on the Server

Supercharging Javascript, Part 2: GZip Everything

Previous: Make as Few HTTP Requests as Possible

In these days of 50+ Mbit residential internet connections there is a tendency to not give a second thought to bandwidth but this is an error for four reasons:

  1. Not everyone is on a high speed connection. I still know (a few) people who use dialup;
  2. You or your client or employer are paying for your site's bandwidth;
  3. Latency is not (completely) inversely proportional to bandwidth, so payload size can still affect load time even on fast connections; and
  4. With the rise of mobile broadband through 3G and similar networks, data usage and load times are much more important than they are for landline connections.

Luckily PHP comes to the rescue and we only need to add one line to our script.

<?php
// These no longer even need to be under the document root
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/script/');

$bundles = array(
  'site' => array(
    'jQuery-1.3.2.js',
    'jquery.bgiframe.js',
    'jquery.dimensions.js',
    'supersubs.js',
    'superfish.js',
    'site.js',
  ),
);

ob_start('ob_gzhandler');

$site = $_GET['site'];
if (!isset($bundles[$site])) {
  error_log("javascript.php: Unknown bundle '$site' requested");
  exit;
}
header('Content-Type: text/javascript');
foreach ($bundles[$site] as $file) {
  if (@readfile(SCRIPT_DIR . $file) === false) {
    error_log("javascript.php: Error reading file '$file'");
  }
}
?>

This turns on PHP output buffering and enables the GZip handler. The handler does all the hard work of determining whether the client will accept gzip encoding, based on its HTTP headers.

Next: Minify Everything

Supercharging Javascript, Part 1: Make as Few HTTP Requests as Possible

The more Javascript files you have, the slower your page will load. I've seen some big sites that have over 20 external Javascript files, which is bewildering. Ideally you should have exactly one. That doesn't mean you can't develop with multiple Javascript files. Throw in jQuery and a few plugins and you might be up to ten or more before you even put in your own. That's not a problem because we're only concerned with what gets sent to the client.

So the first optimization we can make is to use our dynamically generated Javascript to combine all our Javascript files into a single HTTP payload.

<?php
// These no longer even need to be under the document root
define('SCRIPT_DIR', $_SERVER['DOCUMENT_ROOT'] . '/script/');

$bundles = array(
  'site' => array(
    'jQuery-1.3.2.js',
    'jquery.bgiframe.js',
    'jquery.dimensions.js',
    'supersubs.js',
    'superfish.js',
    'site.js',
  ),
);

$site = $_GET['site'];
if (!isset($bundles[$site])) {
  error_log("javascript.php: Unknown bundle '$site' requested");
  exit;
}
header('Content-Type: text/javascript');
foreach ($bundles[$site] as $file) {
  if (@readfile(SCRIPT_DIR . $file) === false) {
    error_log("javascript.php: Error reading file '$file'");
  }
}
?>

This rudimentary version can make a big difference if you have a lot of files (which you probably do if you're using jQuery or YUI). The above is based on a trivial example that uses the superfish jQuery plugin with (required and optional) dependencies and results in approximately 140k of Javascript. But we can do much better.

Next: GZip Everything

Supercharging Javascript in PHP

I am an unapologetic stickler for speed when it comes to Web applications. A Website should be fast and responsive. The average attention span of a user is about eight seconds. One of the quickest and easiest ways to speed up your Website is by improving your Javascript delivery and usage. This is a HOWTO guide on getting the best Javascript performance you can.

If you haven't yet, install the Firefox plugins Firebug and YSlow. They're invaluable for profiling page loading performance.

For these posts I'm going to use PHP running on Apache for my examples. It's a popular platform but of course not the only one. These principles apply equally well to any Web technology stack.

I'm also a big fan of URL rewriting. Enabling URL rewriting is as simple as uncommenting mod_rewrite from your Apache config file (typically httpd.conf). Many PHP stacks (eg XAMPP) have this enabled by default. Most if not all hosting providers will have this available on even their cheapest plans.

Over the years I've sometimes encountered issues with dynamically generated Javascript and CSS in certain browsers. As such--although not strictly necessary--I like to give dynamically generated Javascript and CSS the "correct" extensions of .js and .css with URL rewriting. So instead of:

<script src="/javascript.php"></script>

you're better off having a URL like "/javascript/site.js" map to a PHP script. With Apache and mod_rewrite this can be as simple as putting the following lines into an .htaccess file in your document root.

RewriteEngine On
RewriteBase /
RewriteRule ^javascript/site\.js$ /javascript.php [L]

Or I prefer something like:

RewriteRule ^javascript/(\w+)\.js$ /javascript.php?site=$1 [L]

This allows you to create "bundles" of Javascript files. One part of your site might require YUI and another jQuery. I'm going to use this version below; all of these posts will assume it is being used.

My goal is to take well-established best practices and combine them into an easy-to-use PHP-oriented solution so you can easily do things the right way without loss in flexibility or power.

Everything Old is New Again

When I was studying computer science, one of my lecturers told me that everything old will be new again, and he was right. The latter half of the twentieth century saw the transition from large mainframes, shared because computing time was so expensive, to small personal computers. And in the last few years we've seen the rise of cloud computing, which is little more than buying time on large servers.

I started Web development in the mid 90s when Perl CGI scripts were cutting edge, HTML was transitioning from 2 to 3 and Netscape Navigator reigned supreme. Web development was a niche back then but there was still a debate--mostly on Usenet--about the right way to do things. A popular school of thought at the time was that HTML was semantic and shouldn't be used for graphical interfaces. Text browsers such as lynx were typically Exhibit A.

Fast forward a decade and it would be an understatement to say that the world--even the Web world--has changed. No longer a toy of geeks and technocrats, it has gone mainstream. The argument over whether or not Web pages should be graphical has long since been consigned to the dustbin of history. But the semantic Web argument is back with a vengeance.

Everything old is new again.

This time however the battleground is tables vs "pure" CSS. The argument goes that tables are semantic and it is "wrong" to use semantic elements for layout purposes as explained in tableless design.

Now I am first and foremost a pragmatist. Most of the time we are writing software because someone is paying us to. That could be our employer, client or shareholders. It doesn't matter. The point is that we are getting paid to deliver something. As a pragmatist, my position is that our first responsibility is to put those requirements and needs ahead of our own. Programmers are opinionated--arguably judgemental--creatures, with yours truly being no exception. This can lead to a tendency to do things for reasons that can be described as nothing other than religious. It's why at least some of us get caught up in pointless debates like ATI vs nVidia, Intel vs AMD, Nikon vs Canon, Windows vs Linux and so on.

Tables vs pure CSS is no exception.

I have several problems with the anti-table argument:

1. Tables are significantly more backwards compatible than CSS. Much as we might despise it, most of the time we still need to support IE6 so this is a real issue;

2. Vertical centering in CSS is hard. An often-cited counterargument to this is Vertical Centering in CSS. Three layers of divs and relative+absolute+relative positioning? How exactly is this better than:

<table style="height: 400px;">
<tbody>
<tr>
  <td style="vertical-align: middle;">everything is vertically centered</td>
</tr>
</tbody>
</table>
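
For contrast, the pure-CSS version that article describes looks something like this (a sketch of the three-layer technique):

<div style="height: 400px; position: relative;">
  <div style="position: absolute; top: 50%;">
    <div style="position: relative; top: -50%;">
      everything is vertically centered
    </div>
  </div>
</div>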

3. The semantic meaning of HTML elements is, to a certain extent, subjective. The best example of this is where some Web developers seek to replicate the superscript and subscript tags with CSS, viewing such tags as stylistic anachronisms like <center>. This attitude is severely misguided, as best described in Beware CSS for Superscript/Subscript. If the <table> element had been named <grid>, would we even be having this argument?

4. Many layout issues in CSS require a sacrifice of one or more of simplicity, flexibility or cross-browser support. I posed the question Can you do this HTML layout without using tables? on StackOverflow and in three months I am yet to receive an adequate pure CSS solution for what is a trivial layout issue.

5. Floats are a poor substitute for tables when laying out side-by-side content. Floats need a width defined; table cells don't. If the floats are collectively too wide to fit, they will drop down to the next "line", which is rarely what you want. It is hard to get floats to fill the remaining space but trivial for table cells. And floats often necessitate the use of empty div tags with clear: both just to work.

Don't get me wrong: floats are great and certainly have their uses. A lot of the time they will be adequate to the task, but it is undeniable that tables are more capable and compatible.

So I take the pragmatic view and favour pure CSS by default. If something can be done relatively easily in pure CSS and the solution is sufficiently robust and browser-compatible then I'll do it without hesitation. But as soon as I find myself spending possibly hours trying to replicate simple table functionality--and let's face it, most Web developers have been there at some point--it's time to pull out Old Faithful, the HTML table.

So please let's all take a collective deep breath and pause--at least momentarily--before starting a religious war the next time someone uses a table for layout.