Friday, November 18, 2011

A Simple L-system in Processing.js

Recently I attended TEDxVancouver 2011 and my favourite talk by far was by a brilliant generative artist named Jer Thorp. I have always been a fan of generative art, but have never been motivated to create any of my own. Jer's talk inspired me to play with one of the tools that he uses to create his works: Processing, a Java-like programming language, development environment and runtime optimized for creating images, animations, simulations and interactive visuals.

Processing projects or "sketches" are typically packaged as self-contained Java applets and can be embedded in web pages or run stand-alone. This, however, requires that a JVM be installed and that the user has given Java applets permission to run in the browser, so I was very happy to hear that Processing has been ported to JavaScript, which allows Processing sketches to be embedded in, or referenced by, HTML pages and run directly by any HTML5-capable browser. The port, called Processing.js, makes use of the new HTML5 Canvas element, and WebGL for 3D rendering. Processing.js supports about 90% of the Processing language.

Processing.js provides three ways to reference a sketch in an HTML page:
  1. You can reference the file that contains the Processing source code in the same way you would reference an external JavaScript file.
  2. You can embed the Processing script directly in-line in the HTML, again in the same way you can embed JavaScript directly in an HTML file.
  3. You can use Processing.js as a pure JavaScript API.
In the first two cases Processing.js parses the Processing code and converts it to JavaScript. This obviously has performance implications, but it is very convenient to be able to prototype your sketches in the Processing IDE, and the DOM and other JavaScript embedded or referenced in the page is still accessible to your Processing code. If you want a richer development experience there is also a Processing plug-in for Eclipse, and the Processing API has also been ported to other languages including Ruby, Scala and Clojure.
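The first approach needs nothing more than a script reference and a canvas element. The file names below are placeholders; you would substitute the actual Processing.js release you downloaded and your own sketch file:

```html
<!-- Load the Processing.js library (file name is illustrative). -->
<script src="processing-1.3.6.min.js"></script>

<!-- Processing.js fetches and parses sketch.pde, then renders it into this canvas. -->
<canvas data-processing-sources="sketch.pde"></canvas>
```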

For my initial project I decided to build a simple L-System, to simulate the growth of a plant and then progressively render the resulting simulation. A Lindenmayer System is a string-rewriting technique that can be used to simulate plant growth. I also wanted to experiment with having the Processing code interact with the DOM and other JavaScript in an HTML page, so I decided that I would wrap my experiment in a blog post and use the words in the blog as the "branches" of the plant.
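To make the string-rewriting idea concrete, here is a minimal L-system in plain JavaScript (a sketch of the technique, not my actual plant code), using Lindenmayer's original two-symbol "algae" grammar:

```javascript
// A minimal L-system: repeatedly rewrite every symbol in the string
// using a table of production rules. Symbols with no rule are copied through.
function rewrite(axiom, rules, generations) {
  var current = axiom;
  for (var i = 0; i < generations; i++) {
    var next = '';
    for (var j = 0; j < current.length; j++) {
      var symbol = current.charAt(j);
      next += rules[symbol] || symbol;
    }
    current = next;
  }
  return current;
}

// Lindenmayer's "algae" system: A -> AB, B -> A.
var rules = { A: 'AB', B: 'A' };
var result = rewrite('A', rules, 4);
console.log(result); // "ABAABABA"
```

A plant simulation uses the same loop with a richer alphabet, where symbols encode drawing operations like "grow a branch" or "turn"; the rendering pass then interprets the final string.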

It took me a few hours to implement an L-system in Processing, and then a few more to get my code working as embedded Processing.js code in a simple web page. I also spent a couple of hours researching how to deal with older browsers that do not support HTML5 and the Canvas element, but in the end I decided to simply fail elegantly (I hope) in the absence of support for the Canvas element, which I detect using Modernizr.

And then I tried to get it all working inside a Blogger blog post. This was probably the hardest, most frustrating and time-consuming part of the whole process. After a good four hours of finding the right CDN for each script, modifying script load orders, editing the blog template, and minifying my code, I got it all working. I have tested it on IE 6 through 8 (elegant fail), IE9 (success!) and a number of versions of Firefox, Safari and Chrome, but you never know; if it causes your browser to throw an exception, I apologise.

And here is the final result (which has probably completed animating while you read the post so go ahead and refresh the page):

Update (March 16th, 2012): I finally got the code in this post working again. The reason that it broke is that both the Blogger web editor and Windows Live Writer do some infuriating auto-formatting which breaks the inline Processing script! I broke the code by attempting to fix a typo using first one and then the other of the aforementioned tools. Two rounds of auto-meddling left the page totally broken. Fixing it was a pain in the arse, but here you go.


If anyone is interested I will post the code in a subsequent post. Everything is embedded in this very page, but finding it amongst all of the boilerplate/template markup is a pain in the arse.
I plan to make my L-System more complex and reflective of real plant growth so look out for more posts and more code.

Tuesday, November 8, 2011

All Your node Are Belong to IIS

I recently wrote a post in response to a criticism of node.js. The author’s major gripe was about the performance characteristics of node, which is essentially single-threaded. I asserted that you could probably address this by load-balancing across multiple instances of node on the same server. Well Tomasz Janczuk has developed a better solution, called iisnode, which allows you to host node in Internet Information Server and leverage some of IIS’ process management, security, logging, scalability and management capabilities.

Tomek has documented how to get iisnode up and running in his blog post titled Hosting node.js applications in IIS on Windows.

The prolific Scott Hanselman, who is the Kurt Vonnegut of software development bloggers, has a great post about iisnode titled Installing and Running node.js applications within IIS on Windows - Are you mad?. It provides a thorough overview of iisnode and also includes some interesting performance data.

Tuesday, October 18, 2011

Built-In Obsolescence

Here’s another of those thoughts that has nothing to do with software but that I think is interesting enough that I want to post it.

I am a huge fan of science fiction, and I just got done reading a brilliant book by one of my favourite authors, China Miéville, called “Embassytown”. Miéville does not disappoint with his latest work; mashing up New Weird and Space Opera to create a most thoroughly enjoyable yarn. I highly recommend it!

This story got me thinking about the cultural evolution of sentient species. The one concrete example I have leads me to believe that the same sentience that allows these species to dominate their worlds also all but guarantees that they will eventually self-destruct, in the rare case that they are not first made extinct by some environmental catastrophe. Though their demise will probably be brought about by technology-accelerated runaway consumption (as my subject species is demonstrating), I suspect that there is a more subtle reason why their demise is inevitable: the species-wide nihilism that I assert is inevitable as the species unravels the mysteries of its own existence and the Universe.

As the human species begins (and I do assert that we are only in the paddling pool of self-discovery) to unravel the nature of its own existence and the Universe of which it is an infinitesimal part, one by one the things that it imagines are vitally important will become meaningless. For example, how can the significance of any individual’s hopes, dreams, aspirations, desires and beliefs stand firm in the face of an understanding of the biological evolutionary process? I would assert that they cannot.

George Bernard Shaw wrote the following about Darwinism in the preface to Back to Methuselah:

“But when its whole significance dawns on you, your heart sinks into a heap of sand within you. There is a hideous fatalism about it, a ghastly and damnable reduction of beauty and intelligence, of strength and purpose, of honor and aspiration, to such casually picturesque changes as an avalanche may make in a mountain landscape, or a railway accident in a human figure.”

Note: Shaw wrote this as a criticism of Natural Selection; he was a Lamarckist.

Though there are still many who do not even believe in Evolution, let alone understand it well enough to come to this miserable and inevitable conclusion, it is an unstoppable meme; inevitably all of humanity will come to understand it, assuming we don’t self-immolate first of course. And when the entire species succumbs to this meme it will simply expire from collective nihilism. Perhaps this is why there has been, and continues to be such resistance to this so-obviously correct idea; on some level our genes “know” that this level of species-wide self-awareness is fatally dangerous. And Evolution is not the only dangerous idea that undermines the human condition.

I am a self-confessed Nihilist and Atheist so maybe I am just projecting, or this might just be my attempt to understand the Conservative and “Anti-science” worldview.

It should be kept in mind that I had a smile on my face the entire time I was writing this. I don’t take myself too seriously and neither should you.

Wednesday, October 12, 2011

The Argument from Vitriolic Invective

I was sent the following link after posting yesterday about node.js:

http://www.teddziuba.com/2011/10/straight-talk-on-event-loops.html

I think it is safe to say that Ted Dziuba thinks that event-driven systems in general, and node.js specifically, are bad tech. And he is particularly insistent that JavaScript has no place running on “The Server”. I am all too familiar with this particular song. While working as a Technical Evangelist for the .NET Framework, and then working as the Performance Program Manager for the CLR, I heard it played more times than I have heard “Elmo’s Song” played, and my kids are one and three, so you get the point.

When .NET came on the scene at the turn of the century I heard a cacophony of grumblings from the Java community about how it would never rival the performance of the [whichever] JVM or the breadth of the JDK and 3rd-party Java libraries, from the C/C++ community that its performance would suck so badly that it would be unusable for high-performance workloads, from the Visual Basic community that it would never replace good-old VB, and from the academic community that it would never be suitable for teaching or research. They all turned out to be mostly wrong. Yes, there are some workloads that still need C/C++ because of their extreme performance requirements, but for the vast majority of workloads well-written .NET Framework code holds its own (and not to mention the developer productivity gains!). And obviously the .NET Framework’s performance has improved over the decade so there remain few workloads which are beyond its capabilities. Of course anyone can write poorly performing code, but that is equally true for C++ as it is for Erlang, Scala, Java, C#, F# or TSQL.

So what’s my beef with Mr. Dziuba’s post?

Obviously he knows his proverbial stuff, but this post comes across as nothing more than a bitter rant, despite the fact that it has “math” in it. And his assertion that “threaded programming, … is easier to understand than callback driven programming” made me literally laugh out loud. Perhaps it is true for him, but for the vast majority of developers out there multi-threaded programming is a source of wide-eyed terror; the appropriately ominous words “deadlock”, “race condition”, “convoy”, “starvation” and “Heisenbug” come to mind. Perhaps he is correct about the performance characteristics of multi-threaded versus event-based systems, but in the end if node.js is good enough for most workloads, and is easier for developers to work with, then who gives a rat’s arse.

His final assertion that JavaScript is not appropriate on “The Server” also made me laugh; as if the server is some sort of sacred ground not to be touched by the unwashed feet of a lowly scripting language. Node.js is based on the V8 JavaScript engine from Google, which compiles the JavaScript down to native code on first execution and has a few tricks up its sleeve to avoid the performance penalties associated with dynamic or “duck” typing. No, it’s not as fast as an equivalent C, C++ or x86 assembly program, but I don’t doubt that it will perform adequately for the majority of use cases. And JavaScript is not standing still either. Not satisfied with having the fastest JavaScript runtime, Google today announced an early preview of Dart; a new language that is based on JavaScript and that, among many other language enhancements, addresses the performance limitations of the current incarnation of JavaScript. It will run as native code on the server or as compiled JavaScript in browsers that don’t support it natively, which is currently all of them including Chrome. Unfortunately V8 does not yet natively support Dart either (though I don’t doubt it soon will), and there are no binaries available, so if you want to play with it you are going to have to download the source and build it yourself.

Companies like Intel are also working on providing technology that addresses the performance issues with JavaScript. JavaScript is already the dominant client-side development language, and it looks like it may soon have a significant footprint on “The Server” too, despite Ted Dziuba’s strong objections.  

Tuesday, October 11, 2011

Architecting Simplicity

I am amazed at the plethora of products and technologies that are required to deliver a best-of-breed, leading-edge, enterprise-scale, line-of-business software system.

As an example, a system that I am currently working on has an architecture that uses the following products and technologies:

None of the aforementioned technologies are being used gratuitously; the architecture that aggregates all of the above is necessarily complex, given the requirements. A senior developer working on this project needs to understand all of these technologies and will be writing “code” in HTML, CSS, JavaScript, XML, C# and TSQL on a daily basis. That’s a lot of tech to wrap one’s head around. And that does not include understanding the domain “problems” that the system needs to solve, which are typically complex in their own right. And this amount of technology is not atypical for enterprise line-of-business application development.

Does software that solves complex problems really need to have so many moving parts? Isn’t Simplicity one of the core tenets of great software design? 

I recently had the opportunity to chat with Rob Boyes, a Technical Director at airG, about the technologies that they are using for their latest product and service offerings. airG is a leading mobile social entertainment provider based in Vancouver, and they have millions of users from across the planet using their software. Rob told me that, though in the past they have used the LAMP stack for their backend platform, they are now using node.js and mongoDB. Though I knew of the existence of mongoDB, I had to admit to Rob that I had not heard of node.js. Since I love nothing more than tinkering with new software technology, this conversation motivated me to do a little hands-on research into these technologies, and I have to say that I have been super-impressed; these two technologies are, in a word, “awesome”! And much of that awesomeness derives from their elegant simplicity.   

node.js, or just “node”, is a server-side JavaScript runtime based on Google’s V8 JavaScript Engine; the same JavaScript engine that is in Google Chrome. It includes built-in HTTP support (though it is not limited to the HTTP protocol for network IO).

Here is a very simple example of node JavaScript code:

var http = require('http');

// The callback is invoked once for every incoming request.
http.createServer(function (request, response) {
   response.writeHead(200, { 'Content-Type': 'text/html' });
   response.end('<html><body><h1>The Barbarian Programmer</h1></body></html>');
}).listen(8000, '127.0.0.1');

Obviously being able to write JavaScript on the server gives web applications elegant symmetry, but node’s execution model is also very simple; the runtime, which will run on just about any modern operating system, runs user code on a single thread (though the runtime itself is multi-threaded). Request processing is non-blocking and is based on an asynchronous event/call-back model, so the entire server is super-scalable, and developers do not need to concern themselves with pernicious thread synchronization issues. There is also nothing to stop you from running multiple hardware-thread-affined instances of the node runtime on a single box  and using a load-balancer, probably also running on node, if you want to take advantage of multiple cores or CPUs for additional scalability and/or throughput. How to Node is a good place to learn about all things node.  

There are also a lot of additional JavaScript libraries for node, including Connect and express, which further simplify the development of web applications. There is also a great package manager available for node called npm, which makes installing these libraries dead easy. It is currently “experimental” on Windows.

mongoDB is a super-scalable “document-oriented” database, which natively supports the storage and retrieval of JSON(ish) documents. This makes it the perfect choice for use with node.js. A node.js driver is available for mongoDB, and drivers are also available for just about every platform under the sun, including the .NET Framework. mongoDB binaries are available for Windows, Linux, OS X and Solaris, and since it is open source, so is the source code. You can install the node.js drivers using npm.

When node.js and mongoDB are combined with HTML5, CSS3, jQuery and client-side JavaScript they represent a super-scalable, simple, consistent and powerful web application platform. And it’s JavaScript (and JSON) all the way down! Obviously these technologies are not going to be suitable for every type of application, but I will most definitely be looking for opportunities to use them in upcoming systems that I design. Perhaps you should too.

Mad props to Rob, for reminding me that there are indeed new things under the sun.

Friday, September 16, 2011

Ten Principles of Good Design Redux | Fin

My previous post was the last in my “Ten Principles of Good Software Design” series. I think these principles are beautiful in their simplicity, and profound in their universality. I thoroughly enjoyed writing these posts and I am grateful to Herr Rams for providing me with the inspiration to do so.

Here again are his principles from his own mouth:

Good Software Design is Honest

Ten Principles of Good Design Redux | Part 6

The Software Industry has been lying to itself and its customers since its emergence. At some point in the early evolution of the Software Industry someone must have noticed that the emerging development lifecycle model, originally described by Winston Royce and now known as the “Waterfall” or “Big Design Up Front” model, was better suited to Aeronautical Engineering than it was to software development. Designing and building software is not the same as designing and building aircraft.

[Update (2011-10-07): Winston Royce did not propose Waterfall; he was the first to formally describe it, and used it as an example of how not to do software development. He was the “someone” I was referring to in the previous paragraph. Though this error makes my assertion that Aeronautical Engineering is better suited to the Waterfall Model seem a little arbitrary, it does not invalidate the point of the post. Mea culpa for poor fact checking.]

I am not so naĆÆve that I would suggest that they are different merely because of the complexity of the problem domain or because of the amount of uncertainty innate to the process. Though I can only imagine that having the Laws of Physics as a significant source of constraints must reduce the uncertainty somewhat in the design and implementation of an aircraft, I would assert that they are most different in the degrees to which they are subject to the more capricious aspects of Human Nature.

When an aircraft manufacturer designs a new aircraft they typically know precisely what the majority of the requirements are from the outset, e.g. carry this many Wi-Fi-connected, grumpy adults and screaming infants from this point on the globe to this other point, using this much fuel, and oh, don’t crash. The tolerances and constraints are well understood; the laws of physics are essentially immutable, and you can only “comfortably” cram so many humans into a given space (though as someone who is over six foot and flies regularly I have to say that I am not so convinced of my last point).

The aircraft manufacturer can plug all these requirements and constraints into a model and calculate whether or not, given existing and emerging technology, they can feasibly build the aircraft, or if and where the invention of new technology is going to be required. Much of the problem domain is known and is relatively constant, i.e. the Laws of Physics and the current state of the Material Sciences. They also know what they don’t know, and they can mostly quantify the risk\cost associated with that uncertainty thanks to good historical data. A lifecycle model that is dominated by a discrete design phase is clearly effective in bringing a new aircraft to market, though obviously prototyping and innovation are also required during this initial phase. The cost associated with this protracted design phase is accepted as necessary by the, now mature, aviation industry, probably because the cost of failure is so high.

So why is Software development different? One of the most significant differences between Aeronautical and Software Engineering is that users of software products and systems are very rarely able to articulate detailed requirements at the outset. The constraints and requirements have to be extracted out of the minds of the intended users, in the best case, or out of the minds of one or more analysts who think they understand the users’ requirements, in the worst. And though the Laws of Physics are at play in the hardware that the software runs on, they almost never have to be considered as constraints in the design of the software. Typically the requirements gathering process is a bootstrapping exercise that continues well into the actual development of the system. Users have to have usable examples of what they don’t want to lead them to understand what they actually want, and the engineers have to attempt to solve some of the known hard technical problems in order to reveal the initially-unknown, usually harder, ones.

And it is impossible, at the beginning of the project, to estimate how long it is going to take to extract the real requirements and then develop the software to meet those requirements. That is not to say that a waterfall model that included exhaustive prototyping during the design phase would not work for software, but it would require that everyone acknowledge that the requirements-gathering phase would need to be completed before a fixed-cost time and effort estimate of the development could be provided, and that the duration of that initial phase could only be roughly estimated. There are simply more unknown unknowns in Software Development than in classical Aeronautical Engineering. I say “classical” because modern aircraft probably require as much Software Engineering as they do Aeronautical Engineering.

Another difference is that because so much of the software that powered the growth of the industry was developed by under- or un-paid geeks, it has created a mass hallucination about how much [paid for] time and effort it actually takes to develop high-quality software. Everyone has now come to accept this hallucination despite the number of software projects it has caused to run over time and budget, or to fail completely.

But thankfully all of this is changing; Agile software development practices are an attempt to address this innate dishonesty, and acknowledge that we as software developers simply don’t know what we don’t know. And though Agile has become mainstream, there are still those who doggedly cling to the old dishonest and delusional ways.

It is every software architect’s and developer’s responsibility to promote and champion Agile as an honest approach to the design and development of great software, particularly in the face of grumblings from old-school project managers and customers who want the illusion of fixed risk.

Note: I could write an entire book on the topic above, and was well on the way to doing so before I reminded myself that this was just a blog post destined to be read by 10s of my friends. I am sure though that the above makes the point I was trying to make.

Friday, September 9, 2011

Good Software Design is as Little Design as Possible

Ten Principles of Good Design Redux | Part 10

I think it was the eminent Rico Mariani who coined the term “OOPoholic” to describe a software engineer who is addicted to adding gratuitous indirection and complexity to their code in the name of Object Oriented Design. After doing many code and architecture reviews I now develop a speech impediment every time I hear the word “facade”.

Einstein is credited with saying "Everything should be made as simple as possible, but not simpler." I think this holds especially true for software. Design patterns were originally proposed to make designing and describing designs easier, not harder. I propose the following razor:

If using a particular design pattern makes it harder to describe the overall design to one’s grandmother, then one probably shouldn’t be using it.

Note: Replacing “grandmother” with “Project Manager” or “Client” in the aforementioned, does not reduce its utility in any way.

Friday, September 2, 2011

Good Software Design is Environmentally Friendly

Ten Principles of Good Design Redux | Part 9

This is the principle that I have taken the most liberty with. Rams’ original meaning was self evident; the manufacturing processes and materials used to realise a design should be environmentally sustainable and generally “Green”. I would agree that software should be designed in such a way that it uses hardware, and thus energy, efficiently; but I think there is a much broader application for this principle.

In a recent post I wrote about Gestalt Driven Development and Gestalt Driven Architecture. I defined Software Architecture as “the process of designing and developing software in such a way that it will remain in harmony with the significant contexts within which it is created and runs over its entire lifetime.” The “significant contexts” are ostensibly the software’s environment, and it is this environment that should be the software’s BFF.

Thursday, September 1, 2011

Good Software Design is Thorough Down to the Last Detail

Ten Principles of Good Design Redux | Part 8

This principle speaks to one of my pet peeves; Software Architects are first and foremost Software Engineers, and therefore need to be able to map any high level design they create to at least one feasible concrete implementation. Some people seem to believe that once you become a Software Architect that you are excused from understanding the technology all the way down to the last turtle. I have witnessed too many once-technical architects design solutions that are impractical or inappropriate, because they have lost touch with the underlying technologies.