Tuesday, December 20, 2011

Why Software Development is like Ironing a Thneed

 I’m being quite useful. This thing is a Thneed.
A Thneed's a Fine-Something-That-All-People-Need!
It's a shirt. It's a sock. It's a glove. It's a hat.
But it has OTHER uses. Yes, far beyond that.
You can use it for carpets. For pillows! For sheets!
Or curtains! Or covers for bicycle seats!

                                         – from “The Lorax” by Dr. Seuss

During my time in the military two decades ago I became highly skilled at ironing. And not only shirts and pants, but beds, hats, sheets and other items one would not normally consider “ironable”. I continue to this day to do my own ironing. I actually find it rather therapeutic.

Since having children I have also become reacquainted with the works of Theodor Geisel, more affectionately known as Dr. Seuss. It struck me the other day while ironing a particularly pesky shirt that software development is very much like ironing a Thneed.

So let’s make some assumptions about a Thneed based on the description above. Some poetic licence and imagination will be required.

  1. Nobody is entirely sure what a Thneed is, not even the manufacturer.
  2. When customers buy a Thneed they have only a vague idea of what they need it for and how it is going to make their lives better.
  3. No two Thneeds are exactly alike; they are the snowflakes of garments.
  4. Thneed producers create new and improved Thneeds all the time.
  5. A Thneed is too big and awkwardly shaped to fit on your ironing board and Dr. Seuss makes no mention of a Thneed-press.
  6. It is hard, if not impossible, to estimate how long it will take you or a highly trained team of Thneed-ironers to iron a Thneed, with any level of accuracy or confidence in your estimate.
  7. The market for Thneed-ironing accessories is confusing in its profusion, pace and super-competitiveness.
  8. Thneed-ironing methodologies were derived from shirt and pants ironing methodologies, but they are actually poorly suited.
  9. A Thneed is like most other garments in that it occasionally requires ironing.
  10. Thneed manufacturers’ own Thneeds are usually the worst ironed.

So based on the assumptions above, how does one go about ironing this Thneed thing? Well the trick is known by every person who has ever had to iron a shirt.

Manipulate the garment so that a small piece of it is flat on the ironing board and then iron that piece. Then get another piece flat and iron that piece, and so on and so on, until you have ironed the entire garment. If you are foolish enough to try to iron large sections of the garment at the beginning, you will become very frustrated and will probably run out of steam before you are done. You have to “divide and conquer” when it comes to ironing a Thneed; that is the winning strategy.

I don’t think that I need to actually spell out why this is like software development; if you have done any software development in your life you will know that I am right (or mostly so).

Maybe someday I will post “Why Architects are like the Lorax, and Users like the Once-ler”. On second thoughts maybe I won’t.

Happy Ironing!

Wednesday, December 14, 2011

Apache Hadoop on Windows Azure

The brilliant Alexander Stojanovic talking about his latest project, Isotope, which is Microsoft’s Hadoop distribution. Yes, you read that right; Microsoft is going to be packaging up HDFS, MapReduce, Flume, Sqoop, Hive, Pig, Pegasus, Mahout, Lucene and some new technology into a Hadoop distribution for the Windows and Windows Azure platforms.

Update (June 10th, 2012): It looks like Channel9 removed the video for some reason.

Monday, December 12, 2011

Ghost in the Wires

I just finished reading “Ghost in the Wires: My Adventures as the World’s Most Wanted Hacker” by Kevin Mitnick and William L. Simon. It reads like a spy thriller and I literally had a hard time putting it down. The stories of Kevin Mitnick’s social engineering exploits are truly amazing, and regardless of one’s ethical stance on hacking one has to respect his extraordinary audacity.

Given that Mitnick himself admits to being a master of social engineering, i.e. lying and manipulating people, I still cannot say for sure whether or not I believe that all his hacking was motivated by curiosity alone. But I don't think it matters really; I agree with Kevin and his supporters that he was treated rather poorly by the US Justice System.

In hindsight though, I can also totally understand why he was treated as harshly as he was; at the time they simply did not have the necessary understanding to ascertain just how big a threat he was, and so they had to assume the worst, and defer to sources who obviously had issues with him that went well beyond the morality and legality of his hacking activities. I also don’t think it mattered that he had apparently not used any of the access or data he obtained for nefarious purposes; it was simply the fact that he could have caused significant damage and loss if he so chose. It was just pure luck on the part of the targets of Kevin’s hacking that he was not malicious;  and I would hazard a guess that contemporary jurisprudence is not underpinned in any way by luck (though I am no expert in this area, so one never knows).       

This book is well worth the read and I highly recommend it for anyone interested in computer security. It shows that the weakest link in any system is unarguably always the human component, and that without strictly adhered-to policy no system can be made secure, regardless of the size of the technology investment.

P.S. The book refers to a film that was made by Kevin’s supporters; it is called Freedom Downtime - The Story of Kevin Mitnick. It was obviously made on a shoestring budget, and apparently before hacking became a lucrative new line of business for organized crime world-wide, but it is worth watching. It also happens to be available in its entirety online, and I have embedded it below:

Friday, December 9, 2011

Processing.js Snowflake Fail

I probably should be writing a post about Microsoft’s SOA and BPM platforms, but I need a breather from that particular topic, so instead I am going to write about my recent frustrations with Processing.js. I was hoping to be able to create some flashy new sketches, but unfortunately my recent experiments have uncovered a critical bug in Processing.js that will only be fixed in the 1.5 release. 

My 3 year-old daughter can’t stop talking about snow so I decided to create a little snow generator for her and post it on this blog. I also wanted to experiment with Processing.js’ ability to load SVG files, which can then be used in a sketch. My idea was simple; create an SVG file that contains a number of shapes that can be randomly combined to create snowflake shapes. Then generate a collection of those random shapes and animate them. Not an ambitious project in the least.

My previous experiments with Processing were done with the stable 1.5.1 release of PDE. I thought I would try the latest alpha version of the Processing 2.0 PDE for this experiment, primarily because it has a JavaScript mode, and will export a web page that loads Processing.js and your sketch (and detect the necessary browser capabilities too!). It does not seem to provide an option to embed the sketch script directly in the HTML, so the sketch is always referenced as an external pde file.  It took me a couple of hours to create the SVG file in InkScape, and a sketch in PDE that did exactly what I wanted. While prototyping the sketch I was working in the Standard mode, i.e. it generates a Java applet, since that offers the best development-time performance.

When, after completing the sketch, I changed to the JavaScript mode my sketch failed to pretty much do anything other than draw the background gradient.

The original sketch looks like this when running:


Processing provides a loadShape method that takes the path or URL to an SVG file, parses the SVG, and generates Processing-native PShape objects. There is currently no way to load SVG elements that are embedded directly in the HTML. Hopefully this will come in a future version of Processing.js. Processing also provides a getChild method to get shapes nested within the root PShape. PShapes can be drawn directly to the screen or drawn off-screen to a PGraphics object which can then be used at some later time to draw to the screen by calling the image method.
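To make the PShape hierarchy concrete, here is a rough plain-JavaScript sketch of the idea; makeShape and the recursive getChild below are hypothetical stand-ins for illustration, not the real Processing API:

```javascript
// A toy stand-in for the shape tree that loadShape() produces: each node
// keeps an id and a list of child shapes, and getChild() searches by id.
function makeShape(id, children) {
  return { id: id, children: children || [] };
}

function getChild(shape, id) {
  if (shape.id === id) return shape;
  for (var i = 0; i < shape.children.length; i++) {
    var found = getChild(shape.children[i], id);
    if (found) return found;
  }
  return null;
}

// Mirrors the ids in my snowflakes SVG below.
var template = makeShape('snowflakeTemplate', [
  makeShape('mainLayer', [makeShape('spoke'), makeShape('centerCircle')])
]);
```

The real loadShape/getChild pair works the same way conceptually: the SVG element ids become the lookup keys.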

To generate my snowflakes I created an array of PGraphics objects (each with a little wrapper) and then drew random snowflakes to each. I also added some noise and toy physics to make the whole thing a little more realistic. It looked great in PDE.

Note: I initially was sorting the array from smallest to largest and then drawing them in that order, but after comparing the results I could not see a difference and simply omitted the sort. I had to write my own sort function because the sort implementation that is provided in Processing will only sort arrays of int, float and String. 
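For comparison, in plain JavaScript (which is what the sketch ultimately runs as) sorting arbitrary objects only needs a one-line comparator; a sketch assuming a scale-like field on each flake:

```javascript
// Sort flake-like objects from smallest to largest by their scale field
// (a hypothetical stand-in for the sketch's _scale).
var flakes = [{ scale: 0.05 }, { scale: 0.01 }, { scale: 0.1 }];
flakes.sort(function (a, b) { return a.scale - b.scale; });
```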

There was only one small problem; off-screen drawing of PShapes is broken in the current build of Processing.js. I have filed a bug and it looks like this will be fixed for the 1.5 release of Processing.js. So this post obviously does not include the running sketch.

Another Note: I tried using the tint method to modify the apparent brightness of the snowflakes based on their scale each time the snowflake was drawn, rather than explicitly adjusting the stroke color. This KILLED the performance even when it was running in the Standard mode on my quad-core 8GB laptop with hardware-accelerated graphics. Another bug perhaps?

Rather than find a work-around in Processing.js, I will probably try to port this sketch to one of the other JavaScript graphics APIs, like Raphaël for example. And of course I will post the running result and code in some future post on this blog.

Since I can’t show the final result, here is my Processing code for the sketch (usual caveats and disclaimers apply):

color bkground = #000080;
color bkground2 = #0000FF;
snowFlakeFactory factory;
snowFlake[] flakes;

void setup() {
  size(600, 200);
  factory = new snowFlakeFactory("snowflakes.svg");
  flakes = factory.createFlakes(60);
}

class snowFlakeFactory {
  color _background;
  PShape _template, _spoke, _centerHex, _centerCircle, _star,
         _longArms, _mediumArms, _shortArms, _endCircle;

  snowFlakeFactory(String templateFileName) {
    _template = loadShape(templateFileName);
    _spoke = _template.getChild("spoke");
    _centerHex = _template.getChild("centerHex");
    _centerCircle = _template.getChild("centerCircle");
    _star = _template.getChild("star");
    _longArms = _template.getChild("longArms");
    _mediumArms = _template.getChild("mediumArms");
    _shortArms = _template.getChild("shortArms");
    _endCircle = _template.getChild("endCircle");
  }

  snowFlake createFlake() {
    snowFlake flake = new snowFlake();
    PGraphics graph = createGraphics(450, 450, P2D);
    graph.beginDraw();
    float br = 4000 * flake._scale;
    graph.stroke(br, br, 255);
    radialDraw(graph, _spoke, 225, 225, 0);
    if (heads()) {
      if (heads()) {
        graph.shape(_centerHex, 225, 225);
      }
      else {
        graph.shape(_centerCircle, 225, 225);
      }
    }
    if (heads()) graph.shape(_star, 225, 225);
    if (heads()) radialDraw(graph, _endCircle, 225, 225, 190);
    PShape[] arms = {
      _longArms, _mediumArms, _shortArms
    };
    if (heads()) radialDraw(graph, arms[(int)random(0, 3)], 225, 225, 100);
    if (heads()) radialDraw(graph, arms[(int)random(0, 3)], 225, 225, 130);
    if (heads()) radialDraw(graph, arms[(int)random(0, 3)], 225, 225, 160);
    graph.endDraw();
    flake._image = graph;
    return flake;
  }

  snowFlake[] createFlakes(int flakeCount) {
    snowFlake[] flakes = new snowFlake[flakeCount];
    for (int i = 0; i < flakeCount; i++) {
      flakes[i] = createFlake();
    }
    return flakes;
  }

  void radialDraw(PGraphics graph, PShape feature, float originX, float originY, float rad) {
    float xOffset = rad * cos(PI/6);
    float yOffset = rad * sin(PI/6);
    graph.shape(feature, originX, originY + rad);
    graph.shape(feature, originX - xOffset, originY + yOffset);
    graph.shape(feature, originX - xOffset, originY - yOffset);
    graph.shape(feature, originX, originY - rad);
    graph.shape(feature, originX + xOffset, originY - yOffset);
    graph.shape(feature, originX + xOffset, originY + yOffset);
  }
}

class snowFlake {
  float _posX;
  float _posY;
  float _scale;
  float _rotation;
  PGraphics _image;

  snowFlake() {
    _posX = random(width);
    _posY = random(height);
    _scale = heads() ? random(0.01, 0.03) : random(0.04, 0.1);
    _rotation = random(0, PI/6);
    _image = null;
  }

  void drawFlake() {
    pushMatrix();
    translate(_posX, _posY);
    rotate(_rotation);
    scale(_scale); // shrink the 450x450 off-screen image down to flake size
    image(_image, -225, -225);
    popMatrix();
  }
}

void draw() {
  drawBackroundGradient(bkground, bkground2);
  for (int i = 0; i < flakes.length; i++) {
    snowFlake flake = flakes[i];
    float gravity = flake._scale * (10 + random(0, 5));
    float wind = flake._scale * (5 + random(-2, 2));
    flake._posY += gravity;
    flake._posX += wind;
    flake._rotation += 0.01;
    flake.drawFlake();
    if (flake._posY > height + 20) flake._posY = -20;
    if (flake._posX > width + 20) flake._posX = -20;
  }
}

void drawBackroundGradient(color c1, color c2) {
  for (int i = 0; i <= height; i++) {
    float inter = map(i, 0, height, 0, 1);
    color c = lerpColor(c1, c2, inter);
    stroke(c);
    line(0, i, width, i);
  }
}

float _prob = 0.75;

boolean heads() {
  float rand = random(0, 1);
  return (rand < _prob);
}

And here is the SVG:

<svg id="snowflakeTemplate" xmlns="http://www.w3.org/2000/svg" height="1000" width="700" version="1.1">
<g id="mainLayer" stroke="#000" stroke-miterlimit="4" stroke-dasharray="none" fill="none">
<path id="spoke" stroke-linejoin="round" d="M0-0,0,190" stroke-linecap="round" stroke-width="10"/>
<path id="centerHex" stroke-linejoin="miter" d="m-36.48-23.229,0,44.146,36.184,20.891,36.775-21.232,0-41.606-35.987-20.777z" stroke-linecap="butt" stroke-width="10"/>
<path id="star" stroke-linejoin="round" d="M53.858-31.273,49.838-86.48-0.015-62.659l-50.032-24.126-4.3663,55.188-45.979,31.37,45.814,31.287,4.3801,55.416,49.918-23.819,50.097,24.128,4.4815-55.112,46.094-31.294z" stroke-linecap="round" stroke-width="10"/>
<path id="longArms" stroke-linejoin="round" d="M49.142,11.646,0-11.646-49.142,11.559" stroke-linecap="round" stroke-width="10"/>
<path id="mediumArms" stroke-linejoin="round" d="M28.782,6.4858,0-6.4858-28.782,6.3986" stroke-linecap="round" stroke-width="10"/>
<path id="shortArms" stroke-linejoin="round" d="M-12.226,2.7043,0-2.7915,12.226,2.7915" stroke-linecap="round" stroke-width="10"/>
<circle id="endCircle" cx="0" cy="0" r="10" stroke-width="10"/>
<circle id="centerCircle" cx="0" cy="0" r="36.5" stroke-width="10"/>
<rect id="diamond" stroke-linejoin="round" transform="rotate(45)" height="8" width="8" stroke-linecap="round" y="-4" x="-4" stroke-width="10"/>
</g>
</svg>

Friday, November 18, 2011

Simple L-system Processing.js Code

Here as promised is the code from my previous post:

<script src="http://processingjs.org/content/download/processing-js-1.3.6/processing-1.3.6.min.js" type="text/javascript"></script>
<script src="http://cdnjs.cloudflare.com/ajax/libs/modernizr/2.0.6/modernizr.min.js" type="text/javascript"></script>
<script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/1.7/jquery.min.js" type="text/javascript"></script>
<div id="blogpostcontent"><!-- Blog Text -->
<div id="nocanvas" style="display: none; color: Red;"><!-- Fail Text --></div>
<div id="procanvas"><canvas id="processingCanvas"></canvas></div>
<div><!-- More Blog Text --></div>
</div>
<script type="text/javascript">
var words = $("#blogpostcontent").text().replace(/[\.,-\/#!$%\^&\*;:{}=\-_`~()]/g, "").replace(/\s{2,}/g, " ").split(' ');
if (!Modernizr.canvas) { $('#nocanvas').show(); $('#procanvas').remove(); }
</script>
<script type="text/processing" data-processing-target="processingCanvas">

Tree tree;
int wordCount = 0;

class Stack {
ArrayList aList;

Stack() {
aList = new ArrayList(1024);
}

Stack(int initialSize) {
aList = new ArrayList(initialSize);
}

boolean isEmpty() {
if (aList.size() > 0) return false;
return true;
}

void push(Object obj) {
aList.add(obj);
}

Object pop() {
int n = aList.size();
if (n > 0) return aList.remove(n - 1);
return null;
}
}

class Branch {

float x = 0;
float y = 0;
float theta = 0;
float thickness = 0;

Branch() {
}

Branch (Branch branch) {
x = branch.x;
y = branch.y;
theta = branch.theta;
thickness = branch.thickness;
}
}

class Tree {

String axiom, currentString;
String productionRule;
Branch branch;
Stack branchStack;
int pos = 0;
color col;
float angle = PI/10;
float angleChaos = 1;
float initialThickness = 18;
float thickness = initialThickness;

Tree () {
axiom = "F";
currentString = axiom;
branch = new Branch();
productionRule = "F[+FF-F-F][-FF+F-F]";
branchStack = new Stack();
}

void grow() {
grow(1);
}

void grow(int generations) {
for (int i = 0; i < generations; i++) {
String nextString = "";
for (int j = 0; j < currentString.length(); j++) {
char c = currentString.charAt(j);
if (c == 'F') {
nextString += productionRule;
}
else {
nextString += c;
}
}
currentString = nextString;
}
}

void draw() {
for (int i = 0; i < currentString.length(); i++) {
drawBranch();
}
}

void drawBranch () {
if (pos >= currentString.length()) return;
char c = currentString.charAt(pos);
switch (c) {
case 'F':
fill(100 + random(-40, 40), 42 + random(-40, 40), 42);
String word = (wordCount < words.length - 1) ? words[wordCount++] : "noword";
branch.thickness = thickness;
textFont(createFont("Helvetica", branch.thickness));
pushMatrix();
translate(branch.x, branch.y);
rotate(branch.theta);
text(word, 0, 0);
popMatrix();
float extension = textWidth(word);
branch.x += extension * cos(branch.theta);
branch.y += extension * sin(branch.theta);
break;
case '-':
branch.theta -= (angle + random(-1.0 * angle * angleChaos, angle * angleChaos));
break;
case '+':
branch.theta += (angle + random(-1.0 * angle * angleChaos, angle * angleChaos));
break;
case '[':
branchStack.push(new Branch(branch));
if (thickness > 3) {
thickness -= 3;
}
else {
thickness = 1;
}
break;
case ']':
branch = (Branch)branchStack.pop();
thickness = branch.thickness;
break;
}
pos++;
}
}

void setup () {
size(500, 500);
tree = new Tree();
tree.grow(3); // number of generations is a guess; the original value was lost in formatting
}

void draw () {
translate(width/2, height);
rotate(1.5 * PI);
tree.drawBranch(); // one symbol per frame, so the tree grows progressively
}
</script>

The L-system code is based on this Processing code written by Daniel Jones.

A Simple L-system in Processing.js

Recently I attended TEDxVancouver 2011 and my favourite talk by far was by a brilliant generative artist named Jer Thorp. I have always been a fan of generative art, but have never been motivated to create any of my own. Jer's talk inspired me to play with one of the tools that he uses to create his works: Processing, which is a Java-like programming language, development environment and runtime, optimized for creating images, animations, simulations and interactive visuals.

Processing projects or "sketches" are typically packaged as self-contained Java applets and can be embedded in web pages or run stand-alone. This however requires that a JVM is installed and that the user has given Java applets permission to run in the browser, so I was very happy to hear that Processing has been ported to JavaScript, which allows Processing sketches to be embedded in, or referenced by HTML pages, and run directly by any HTML5-capable browser. The new API, called processing.js, makes use of the new HTML5 Canvas element, and WebGL for 3D rendering. Processing.js supports about 90% of the Processing language.

Processing.js provides three ways to reference a sketch in an HTML page:
  1. You can reference the file that contains the Processing source code in the same way you would reference an external JavaScript file.
  2. You can embed the Processing script directly in-line in the HTML, again in the same way you can embed JavaScript directly in an HTML file.
  3. You can use Processing.js as a pure JavaScript API.
In the first two cases Processing.js parses the Processing code and converts it to JavaScript. This obviously has performance implications, but it is very convenient to be able to prototype your sketches in the Processing IDE, and the DOM and other JavaScript embedded or referenced in the page are still accessible to your Processing code. If you want a richer development experience there is also a Processing plug-in for Eclipse, and the Processing API has also been ported to other languages including Ruby, Scala and Clojure.

For my initial project I decided to build a simple L-System, to simulate the growth of a plant and then progressively render the resulting simulation. A Lindenmayer System is a string-rewriting technique that can be used to simulate plant growth. I also wanted to experiment with having the Processing code interact with the DOM and other JavaScript in an HTML page, so I decided that I would wrap my experiment in a blog post and use the words in the blog as the "branches" of the plant.
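At its core an L-system is just that rewriting step applied repeatedly. A minimal sketch in plain JavaScript, using the same production rule as my sketch:

```javascript
// One L-system rewriting pass: every 'F' in the current string is replaced
// by the production rule; all other symbols are copied through unchanged.
function rewrite(current, rule) {
  var next = '';
  for (var i = 0; i < current.length; i++) {
    var c = current.charAt(i);
    next += (c === 'F') ? rule : c;
  }
  return next;
}

// Starting from the axiom "F", each generation grows the string.
var rule = 'F[+FF-F-F][-FF+F-F]';
var gen1 = rewrite('F', rule);
var gen2 = rewrite(gen1, rule);
```

The bracket symbols push and pop drawing state when the string is later interpreted, which is what gives the branching structure.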

It took me a few hours to implement an L-system in Processing, and then a few more to get my code working as embedded Processing.js code in a simple web page. I also spent a couple of hours researching how to deal with older browsers that do not support HTML5 and the Canvas element, but in the end I decided to simply fail elegantly (I hope) in the absence of support for the Canvas element, which I detect using Modernizr.

And then I tried to get it all working inside a Blogger blog post. This was probably the hardest, most frustrating and time-consuming part of the whole process. After a good four hours of finding the right CDN for each script, modifying script load orders, editing the blog template, and minifying my code, I got it all working. I have tested it on IE 6 through 8 (elegant fail), IE9 (success!) and a number of versions of Firefox, Safari and Chrome, but you never know; if it causes your browser to throw an exception, I apologise.

And here is the final result (which has probably completed animating while you read the post so go ahead and refresh the page):

Update (March 16th, 2012): I finally got the code in this post working again. The reason that it broke is that both the Blogger web editor and Windows Live Writer do some infuriating auto-formatting which breaks the inline Processing script! I broke the code by attempting to fix a typo using first one and then the other of the aforementioned tools. Two rounds of auto-meddling left the page totally broken. Fixing it was a pain in the arse, but here you go.

If anyone is interested I will post the code in a subsequent post. Everything is embedded in this very page, but finding it amongst all of the boilerplate/template markup is a pain in the arse.
I plan to make my L-System more complex and reflective of real plant growth so look out for more posts and more code.

Tuesday, November 8, 2011

All Your node Are Belong to IIS

I recently wrote a post in response to a criticism of node.js. The author’s major gripe was about the performance characteristics of node, which is essentially single-threaded. I asserted that you could probably address this by load-balancing across multiple instances of node on the same server. Well Tomasz Janczuk has developed a better solution, called iisnode, which allows you to host node in Internet Information Server and leverage some of IIS’ process management, security, logging, scalability and management capabilities.

Tomek has documented how to get iisnode up and running in his blog post titled Hosting node.js applications in IIS on Windows.

The prolific Scott Hanselman, who is the Kurt Vonnegut of software development bloggers, has a great post about iisnode titled Installing and Running node.js applications within IIS on Windows - Are you mad?. It provides a thorough overview of iisnode and also includes some interesting performance data.

Tuesday, October 18, 2011

Built-In Obsolescence

Here’s another of those thoughts that has nothing to do with software but that I think is interesting enough that I want to post it.

I am a huge fan of science fiction, and I just got done reading a brilliant book by one of my favourite authors, China Miéville, called “Embassytown”. Miéville does not disappoint with his latest work; mashing up New Weird and Space Opera to create a most thoroughly enjoyable yarn. I highly recommend it!

This story got me thinking about the cultural evolution of sentient species. The one concrete example I have leads me to believe that the same sentiency that allows these species to dominate their worlds, also all but guarantees that they will all eventually self-destruct in the rare case that they are not made extinct by some environmental catastrophe. Though their demise will probably be brought about by technology-accelerated runaway consumption (as my subject species is demonstrating), I suspect that there is a more subtle reason why their demise is inevitable: the species-wide nihilism that I assert is inevitable as the species unravels the mysteries of its own existence and the Universe.

As the human species begins (and I do assert that we are only in the paddling pool of self-discovery) to unravel the nature of their own existence and the Universe of which they are an infinitesimal part, one by one the things that they imagine are vitally important will become meaningless. For example, how can the significance of any individual’s hopes, dreams, aspirations, desires and beliefs, stand firm in the face of an understanding of the biological evolutionary process? I would assert that they cannot.

George Bernard Shaw wrote the following about Darwinism in the preface to Back to Methuselah:

“But when its whole significance dawns on you, your heart sinks into a heap of sand within you. There is a hideous fatalism about it, a ghastly and damnable reduction of beauty and intelligence, of strength and purpose, of honor and aspiration, to such casually picturesque changes as an avalanche may make in a mountain landscape, or a railway accident in a human figure.”

Note: Shaw wrote this as a criticism of Natural Selection; he was a Lamarckist.

Though there are still many who do not even believe in Evolution, let alone understand it well enough to come to this miserable and inevitable conclusion, it is an unstoppable meme; inevitably all of humanity will come to understand it, assuming we don’t self-immolate first of course. And when the entire species succumbs to this meme it will simply expire from collective nihilism. Perhaps this is why there has been, and continues to be such resistance to this so-obviously correct idea; on some level our genes “know” that this level of species-wide self-awareness is fatally dangerous. And Evolution is not the only dangerous idea that undermines the human condition.

I am a self-confessed Nihilist and Atheist so maybe I am just projecting, or this might just be my attempt to understand the Conservative and “Anti-science” worldview.

It should be kept in mind that I had a smile on my face the entire time I was writing this. I don’t take myself too seriously and neither should you.

Wednesday, October 12, 2011

The Argument from Vitriolic Invective

I was sent the following link after posting yesterday about node.js:


I think it is safe to say that Ted Dziuba thinks that event-driven systems in general, and node.js specifically, are bad tech. And he is particularly insistent that JavaScript has no place running on “The Server”. I am all too familiar with this particular song. While working as a Technical Evangelist for the .NET Framework, and then working as the Performance Program Manager for the CLR, I heard it played more times than I have heard “Elmo’s Song” played, and my kids are one and three, so you get the point.

When .NET came on the scene at the turn of the century I heard a cacophony of grumblings from the Java community about how it would never rival the performance of the [whichever] JVM or the breadth of the JDK and 3rd-party Java libraries, from the C/C++ community that its performance would suck so badly that it would be unusable for high-performance workloads, from the Visual Basic community that it would never replace good-old VB, and from the academic community that it would never be suitable for teaching or research. They all turned out to be mostly wrong. Yes, there are some workloads that still need C/C++ because of their extreme performance requirements, but for the vast majority of workloads well-written .NET Framework code holds its own (and not to mention the developer productivity gains!). And obviously the .NET Framework’s performance has improved over the decade so there remain few workloads which are beyond its capabilities. Of course anyone can write poorly performing code, but that is equally true for C++ as it is for Erlang, Scala, Java, C#, F# or TSQL.

So what’s my beef with Mr. Dziuba’s post?

Obviously he knows his proverbial stuff, but this post comes across as nothing more than a bitter rant, despite the fact that it has “math” in it. And his assertion that “threaded programming, … is easier to understand than callback driven programming” made me literally laugh out loud. Perhaps it is true for him, but for the vast majority of developers out there multi-threaded programming is a source of wide-eyed terror; the appropriately ominous words “deadlock”, “race condition”, “convoy”, “starvation” and “Heisenbug” come to mind. Perhaps he is correct about the performance characteristics of multi-threaded versus event-based systems, but in the end if node.js is good enough for most workloads, and is easier for developers to work with, then who gives a rat’s arse.

His final assertion, that JavaScript is not appropriate on “The Server”, also made me laugh; as if the server is some sort of sacred ground not to be touched by the unwashed feet of a lowly scripting language. Node.js is based on the V8 JavaScript engine from Google, which compiles the JavaScript down to native code on first execution and has a few tricks up its sleeve to avoid the performance penalties associated with dynamic or “duck” typing. No, it’s not as fast as an equivalent C, C++ or x86 assembly program, but I don’t doubt that it will perform adequately for the majority of use cases. And JavaScript is not standing still either. Not satisfied with having the fastest JavaScript runtime, Google today announced an early preview of Dart; a new language that is based on JavaScript that, among many other language enhancements, addresses the performance limitations of the current incarnation of JavaScript. It will run as native code on the server or as compiled JavaScript in browsers that don’t support it natively, which is currently all of them including Chrome. Unfortunately V8 does not yet natively support Dart either (though I don’t doubt it soon will), and there are no binaries available, so if you want to play with it you are going to have to download the source and build it yourself.

Companies like Intel are also working on providing technology that addresses the performance issues with JavaScript. JavaScript is already the dominant client-side development language, and it looks like it may soon have a significant footprint on “The Server” too, despite Ted Dziuba’s strong objections.  

Tuesday, October 11, 2011

Architecting Simplicity

I am amazed at the plethora of products and technologies that are required to deliver a best-of-breed, leading-edge, enterprise-scale, line-of-business software system.

As an example, a system that I am currently working on has an architecture that uses the following products and technologies:

None of the aforementioned technologies are being used gratuitously; the architecture that aggregates all of the above is necessarily complex, given the requirements. A senior developer working on this project needs to understand all of these technologies and will be writing “code” in HTML, CSS, JavaScript, XML, C# and TSQL on a daily basis. That’s a lot of tech to wrap one’s head around. And that does not include understanding the domain “problems” that the system needs to solve, which are typically complex in their own right. And this amount of technology is not atypical for enterprise line-of-business application development.

Does software that solves complex problems really need to have so many moving parts? Isn’t Simplicity one of the core tenets of great software design? 

I recently had the opportunity to chat with Rob Boyes, a Technical Director at airG, about the technologies that they are using for their latest product and service offerings. airG is a leading mobile social entertainment provider based in Vancouver, and they have millions of users from across the planet using their software. Rob told me that, though in the past they have used the LAMP stack for their backend platform, they are now using node.js and mongoDB. Though I knew of the existence of mongoDB, I had to admit to Rob that I had not heard of node.js. Since I love nothing more than tinkering with new software technology, this conversation motivated me to do a little hands-on research into these technologies, and I have to say that I have been super-impressed; these two technologies are, in a word, “awesome”! And much of that awesomeness derives from their elegant simplicity.   

node.js, or just “node”, is a server-side JavaScript runtime based on Google’s V8 JavaScript Engine; the same JavaScript engine that is in Google Chrome. It includes built-in HTTP support (though it is not limited to the HTTP protocol for network IO).

Here is a very simple example of node JavaScript code:

var http = require('http');

// Create a server that answers every request with a small HTML page.
http.createServer(function (request, response) {
   // writeHead takes a status code first, then an object of headers.
   response.writeHead(200, { 'Content-Type': 'text/html' });
   response.end('<html><body><h1>The Barbarian Programmer</h1></body></html>');
}).listen(8000, '127.0.0.1'); // listen on port 8000, localhost only

Obviously being able to write JavaScript on the server gives web applications an elegant symmetry, but node’s execution model is also very simple: the runtime, which will run on just about any modern operating system, runs user code on a single thread (though the runtime itself is multi-threaded). Request processing is non-blocking and based on an asynchronous event/callback model, so the entire server is super-scalable, and developers do not need to concern themselves with pernicious thread-synchronization issues. There is also nothing to stop you from running multiple hardware-thread-affined instances of the node runtime on a single box behind a load-balancer, probably also running on node, if you want to take advantage of multiple cores or CPUs for additional scalability and/or throughput. How to Node is a good place to learn about all things node.
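The single-threaded callback model has one consequence worth internalizing: a callback never interrupts running code; it is queued and runs only after the current synchronous work has finished. A minimal sketch:

```javascript
var order = [];

// Queue a callback; even with a 0ms delay it cannot run until
// the currently executing synchronous code has completed.
setTimeout(function () {
  order.push('callback');
  console.log(order.join(',')); // sync,callback
}, 0);

order.push('sync'); // always runs first, on the single user thread
```

This is why there are no locks or race conditions in user code: two pieces of your JavaScript are never running at the same time.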

There are also a lot of additional JavaScript libraries for node, including Connect and express, which further simplify the development of web applications. node also has a great package manager, npm, which makes installing these libraries dead easy (npm is currently “experimental” on Windows).
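To give a flavour of what Connect adds, here is the middleware idea it popularized, sketched in plain JavaScript (a toy illustration of the pattern, not Connect’s actual API): each layer receives the request, the response, and a next() function to pass control along the chain.

```javascript
// Compose an array of middleware functions into one request handler.
function chain(middleware) {
  return function (req, res) {
    var i = 0;
    (function next() {
      var layer = middleware[i++];
      if (layer) layer(req, res, next);
    })();
  };
}

var handler = chain([
  // A "logger" layer annotates the request, then passes control on.
  function (req, res, next) { req.log = 'GET ' + req.url; next(); },
  // The final layer produces the response and does not call next().
  function (req, res) { res.body = 'hello'; }
]);

var req = { url: '/' }, res = {};
handler(req, res);
console.log(req.log + ' -> ' + res.body); // GET / -> hello
```

Cross-cutting concerns like logging, sessions and static file serving become small composable functions, which is much of what makes these libraries feel so simple.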

mongoDB is a super-scalable “document-oriented” database, which natively supports the storage and retrieval of JSON(ish) documents. This makes it the perfect choice for use with node.js. A node.js driver is available for mongoDB, as are drivers for just about every other platform under the sun, including the .NET Framework. mongoDB binaries are available for Windows, Linux, OS X and Solaris, and since it is open source, so is the source code. You can install the node.js driver using npm.
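“Document-oriented” here just means the unit of storage looks like a JavaScript object. Under the hood mongoDB actually stores BSON, a binary superset of JSON, but from node the documents you insert and query are plain objects. A sketch of the shape (the field names are invented for illustration):

```javascript
// A mongoDB document is essentially a plain JavaScript object,
// including nested objects and arrays; no schema declaration needed.
var post = {
  title: 'Architecting Simplicity',
  tags: ['node.js', 'mongoDB'],
  comments: [{ author: 'Rob', text: 'Mad props!' }]
};

// The JSON round trip shows why this pairs so naturally with node:
// what goes over the wire is (roughly) what you wrote in code.
var copy = JSON.parse(JSON.stringify(post));
console.log(copy.tags.length + ' tags, first comment by ' + copy.comments[0].author);
```

No object-relational mapping layer, no impedance mismatch; the data structure in your code and the data structure in the database are the same thing.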

When node.js and mongoDB are combined with HTML5, CSS3, jQuery and client-side JavaScript they represent a super-scalable, simple, consistent, and powerful web application platform. And it’s JavaScript (and JSON) all the way down! Obviously these technologies are not going to be suitable for every type of application, but I will most definitely be looking for opportunities to use them in upcoming systems that I design. Perhaps you should too.

Mad props to Rob, for reminding me that there are indeed new things under the sun.

Friday, September 16, 2011

Ten Principles of Good Design Redux | Fin

My previous post was the last in my “Ten Principles of Good Software Design” series. I think these principles are beautiful in their simplicity, and profound in their universality. I thoroughly enjoyed writing these posts and I am grateful to Herr Rams for providing me with the inspiration to do so.

Here again are his principles from his own mouth:

Good Software Design is Honest

Ten Principles of Good Design Redux | Part 6

The Software Industry has been lying to itself and its customers since its emergence. At some point in the early evolution of the Software Industry someone must have noticed that the emerging development lifecycle model, originally described by Winston Royce and now known as the “Waterfall” or “Big Design Up Front” model, was better suited to Aeronautical Engineering than it was to software development. Designing and building software is not the same as designing and building aircraft.

[Update (2011-10-07): Winston Royce did not propose Waterfall; he was the first to formally describe it, and used it as an example of how not to do software development. He was the “someone” I was referring to in the previous paragraph. Though this error makes my assertion that Aeronautical Engineering is better suited to the Waterfall Model seem a little arbitrary, it does not invalidate the point of the post. Mea culpa for poor fact checking.]

I am not so naïve as to suggest that they are different merely because of the complexity of the problem domain or because of the amount of uncertainty innate to the process. Though I can only imagine that having the Laws of Physics as a significant source of constraints must reduce the uncertainty somewhat in the design and implementation of an aircraft, I would assert that they are most different in the degrees to which they are subject to the more capricious aspects of Human Nature.

When an aircraft manufacturer designs a new aircraft they typically know precisely what the majority of the requirements are from the outset, e.g. carry this many Wi-Fi-connected, grumpy adults and screaming infants from this point on the globe to this other point, using this much fuel, and oh, don’t crash. The tolerances and constraints are well understood; the laws of physics are essentially immutable, and you can only “comfortably” cram so many humans into a given space (though as someone who is over six foot and flies regularly I have to say that I am not so convinced of my last point).

The aircraft manufacturer can plug all these requirements and constraints into a model and calculate whether or not, given existing and emerging technology, they can feasibly build the aircraft, or if and where the invention of new technology is going to be required. Much of the problem domain is known and is relatively constant, i.e. the Laws of Physics and the current state of the Material Sciences. They also know what they don’t know, and they can mostly quantify the risk/cost associated with that uncertainty thanks to good historical data. A lifecycle model that is dominated by a discrete design phase is clearly effective in bringing a new aircraft to market, though obviously prototyping and innovation are also required during this initial phase. The cost associated with this protracted design phase is accepted as necessary by the now-mature aviation industry, probably because the cost of failure is so high.

So why is Software development different? One of the most significant differences between Aeronautical and Software Engineering is that users of software products and systems are very rarely able to articulate detailed requirements at the outset. The constraints and requirements have to be extracted from the minds of the intended users, in the best case, or from the mind of one or more analysts who think they understand the users’ requirements, in the worst. And though the Laws of Physics are at play in the hardware that the software runs on, they almost never have to be considered as constraints in the design of the software. Typically the requirements-gathering process is a bootstrapping exercise that continues well into the actual development of the system. Users have to have usable examples of what they don’t want to lead them to understand what they actually want, and the engineers have to attempt to solve some of the known hard technical problems in order to reveal the initially-unknown, usually harder, ones.

And at the beginning of the project it is impossible to estimate how long it is going to take to extract the real requirements and then develop the software to meet them. That is not to say that a waterfall model that included exhaustive prototyping during the design phase would not work for software, but it would require that everyone acknowledge that the requirements-gathering phase would need to be completed before a fixed-cost time and effort estimate of the development could be provided, and that the duration of that initial phase could only be roughly estimated. There are simply more unknown unknowns in Software Development than in classical Aeronautical Engineering. I say “classical” because modern aircraft probably require as much Software Engineering as they do Aeronautical Engineering.

Another difference is that so much of the software that powered the growth of the industry was developed by under-paid or unpaid geeks that it has created a mass hallucination about how much [paid for] time and effort it actually takes to develop high-quality software. Everyone has now come to accept this hallucination despite the number of software projects it has caused to run over time and budget, or to fail completely.

But thankfully all of this is changing; Agile software development practices are an attempt to address this innate dishonesty, and acknowledge that we as software developers simply don’t know what we don’t know. And though Agile has become mainstream, there are still those who doggedly cling to the old dishonest and delusional ways.

It is every software architect’s and developer’s responsibility to promote and champion Agile as an honest approach to the design and development of great software, particularly in the face of grumblings from old-school project managers and customers who want the illusion of fixed risk.

Note: I could write an entire book on the topic above, and was well on the way to doing so before I reminded myself that this was just a blog post destined to be read by 10s of my friends. I am sure though that the above makes the point I was trying to make.

Friday, September 9, 2011

Good Software Design is as Little Design as Possible

Ten Principles of Good Design Redux | Part 10

I think it was the eminent Rico Mariani who coined the term “OOPoholic” to describe a software engineer who is addicted to adding gratuitous indirection and complexity to their code in the name of Object-Oriented Design. After doing many code and architecture reviews I now develop a speech impediment every time I hear the word “facade”.

Einstein is credited with saying "Everything should be made as simple as possible, but not simpler." I think this holds especially true for software. Design patterns were originally proposed to make designing and describing designs easier, not harder. I propose the following razor:

If using a particular design pattern makes it harder to describe the overall design to one’s grandmother, then one probably shouldn’t be using it.

Note: Replacing “grandmother” with “Project Manager” or “Client” in the aforementioned does not reduce its utility in any way.

Friday, September 2, 2011

Good Software Design is Environmentally Friendly

Ten Principles of Good Design Redux | Part 9

This is the principle that I have taken the most liberty with. Rams’ original meaning was self evident; the manufacturing processes and materials used to realise a design should be environmentally sustainable and generally “Green”. I would agree that software should be designed in such a way that it uses hardware, and thus energy, efficiently; but I think there is a much broader application for this principle.

In a recent post I wrote about Gestalt Driven Development and Gestalt Driven Architecture. I defined Software Architecture as “the process of designing and developing software in such a way that it will remain in harmony with the significant contexts within which it is created and runs over its entire lifetime.” The “significant contexts” are ostensibly the software’s environment, and it is this environment that should be the software’s BFF.

Thursday, September 1, 2011

Good Software Design is Thorough Down to the Last Detail

Ten Principles of Good Design Redux | Part 8

This principle speaks to one of my pet peeves; Software Architects are first and foremost Software Engineers, and therefore need to be able to map any high-level design they create to at least one feasible concrete implementation. Some people seem to believe that once you become a Software Architect you are excused from understanding the technology all the way down to the last turtle. I have witnessed too many once-technical architects design solutions that are impractical or inappropriate, because they have lost touch with the underlying technologies.

Wednesday, August 31, 2011

Good Software Design is Long-Lasting

Ten Principles of Good Design Redux | Part 7

Note: This post is technically out-of-order, but if modern microprocessors can do it then so can I.

I very recently designed a system to replace a “legacy” application that has been running for over 25 years. I have no doubt that this is not unusual and that there are millions, if not billions, of lines of code still running that are well into their 20s. One would imagine that we learned our lesson with the whole Y2K nastiness, or simply by observing just how long some code manages to stay running.

Unfortunately I have seen time and time again how ostensibly competent Software Architects tacitly design software systems for a limited life-span. And it is the clients of these architects who have to bear the costs, since by the time the software starts to display the symptoms of its accelerated decrepitude they are usually heavily locked into the software, and have to do significant surgery to make critically-needed changes. Typically these changes add significant technical debt to the system and make subsequent changes harder still.

This can all be avoided if one designs the software with the assumption that it might live for a quarter of a century or longer. Of course you may be given a specific lifespan as a formal requirement. It is up to you to decide just how much salt to take those requirements with, given your understanding of the client or domain. The case might also be made that designing relatively short-lived software is a way to guarantee your own job security. That may hold true in some cases, but I would assert that if you are morally so flexible then you should be in politics rather than software development.

Tuesday, August 23, 2011

Good Software Design is Unobtrusive

Ten Principles of Good Design Redux | Part 5

If I were to apply this principle as it was originally stated by Rams, it would contradict the assertion that I made in an earlier post that a software design needs to be a work of art. Rams suggests that products are “like tools and are neither decorative objects nor works of art”. I wholeheartedly agree that Form should always follow Function in software design, but I also believe that there is a genetic component to aesthetics that is useful in establishing the quality and suitability of a software design.

One would naturally assume that the form follows function principle would be universally adhered to by all software developers and architects. And that is generally a safe assumption. It is not however safe to assume that this form-pursued function is merely the set of formal requirements for the system; often the system is also designed to address the personal requirements of the designer.   

Many Software Architects are rabidly righteous; “Tiger Architects” defending their favoured technologies, design patterns and methodologies with ferocity and zeal, even in cases where those memes are obviously not optimal for a design. Despite the fact that Software is a memeplex on the leading edge of memetic evolution, the aforementioned phenomenon is mostly a manifestation of significantly baser stuff: mammalian territoriality and human ego (though the former may be considered the precursor of the latter).

I have been guilty of this in the past, but over the years I have learned to look at my own software designs through a deconstructivist lens; separating the fundamental requirements of the system from those that I have introduced because of my own biases, preferences and comfort zones. That is not to say that I simply reject all parts of the design that display evidence of the aforementioned, but I attempt to make sure that none of these personal requirements subsumes any of the fundamental requirements of the system. One might say that I practice Deconstructive Software Architecture, though that might be misapprehended if it is confused with the similarly-named movement in meatspace architecture. 

I am motivated to add the following rider to this principle:

Good Software Design is Egoless.

Friday, August 19, 2011

You can teach an old dog new tricks!

I always tell people that if you cut me I bleed pure Microsoft. That is not to say that I am under the illusion that Microsoft products and technologies are the best tools for any problem, they are just the tools that I am most familiar with. I did spend nearly a decade working for Microsoft after all, and had a hand in building some of those very tools, so one would assume that my skill set would be a bit skewed in that direction. So it is not a surprise that I have managed to learn almost nothing about the Oracle Technology Stack (though I did once have to use the Oracle database in a solution).

Until this week that is.

This week I had the opportunity to spend two days with a few technical folks from Oracle and I have to say it was great fun and very enlightening.

In preparation for my meeting I did a little reading on the Oracle Technology Stack (if you can call it a “stack”) and was befuddled by the array of products and technologies that it includes. Oracle is The Borg of the software industry; it has been on an apparently insane shopping spree for the last few years, buying up and assimilating company after company, including PeopleSoft, Siebel Systems, BEA Systems and Sun Microsystems, all of whom had significant technology stacks in their own right. No doubt Oracle invested a fortune to integrate and unify these stacks, but after looking at the Oracle Marchitecture I walked into the meeting with fairly low expectations.

I was very pleasantly surprised and impressed.

The Oracle stack is mature, fully-featured, highly-integrated, and supported by rich, comprehensive tools. I was highly impressed with JDeveloper, Oracle’s flagship development environment, and the unified development experience it offers, particularly for applications targeting the Oracle SOA Suite, which is part of the Oracle Fusion Middleware platform. It allows for the creation of SOA Composite Applications that can include web services (in their broadest sense), BPEL and BPMN workflows, Business Rules, POJOs and a bunch of other capabilities, all supported by intuitive visual designers. The entire application can be developed with JDeveloper and then published directly to the Application Server.

The other technology that I was particularly impressed with is the Oracle Policy Automation (OPA) tool suite. It provides tools for modelling and runtime evaluation of complex Business Rules. The capability that is most impressive is that it allows you to take a policy document that was written in Microsoft Word and transform that document in-place into executable rules. It also supports authoring of rules in Excel. You can then host and evaluate those rules on a web-service-accessible server or you can host the evaluation engine directly in your Java or .NET application. Given that Microsoft does not have a technology to rival OPA I will definitely consider using it in future .NET solutions that require a rich stand-alone Business Rules Engine. Yes, I know Microsoft also has rule engines in BizTalk and Workflow Foundation (though not in version 4.0 for some reason!), but the OPA rules authoring experience leaves both of these technologies in the dust.

I have only just begun to scratch the surface but I am really looking forward to learning more about the Oracle Technology Stack (I never thought I would ever hear myself say those words!).

Now all I have to do is work out how I can use Scala instead of Java as a development language in JDeveloper.