Monday, August 8, 2011

Good Software Design Makes a Product Understandable

Ten Principles of Good Design Redux | Part 4

This principle is applicable in two areas: first, to User Interface Design and User Experience Design, and second, to the Conceptual, Logical and Physical Designs of a software system. In the case of the latter, the comprehensibility of a software system design is so obviously vital to the successful realisation of that design that I will focus this post entirely on the former.
 
The ease with which a user comes to understand the visual metaphors utilized in a user interface, and thus the speed with which they can accomplish necessary tasks and operations, defines how usable that user interface is. From a user experience perspective “understandable” and “usable” are synonymous.
 
In 1995 I worked for a progressive IT company that was in the process of setting up a full usability-testing lab. The plan for this lab called for the installation of cameras to record video of the user’s facial expressions and body language, a keystroke and mouse logger to monitor and capture user input data, a pulse monitor to measure the user’s stress levels, and a high-resolution screen capture of the user’s interaction with the application itself. The plan even called for having statisticians and clinical psychologists on staff to structure the usability tests and interpret the integrated data streams. The goal was not only to test the usability of clients’ applications, but also to gain a deep understanding of the Laws of Usability. I left the company before the plan was realized, to go and work on a new shiny thingy called the “World Wide Web”, so unfortunately I never got to see the lab completed, but it did introduce me to Usability as a sub-domain of Software Design. Even in 1995 the importance of Software Usability and Interaction Design, and their significant impact on the successful development and adoption of software, was well understood, though the Laws themselves had not yet fully emerged.
 
In the mid-2000s, while at Microsoft, I was fortunate enough to interact with some of the brilliant User Experience folks in the Windows Product Group and Microsoft Research. They were on the very bleeding edge of User Interface and Experience Design. Despite all of the UI innovation that happens in these teams, Microsoft has to be conservative in its adoption of these UI innovations because of the very broad use of their client operating systems; I saw many mind-blowing UI prototypes during that time, and only a small percentage of the elements of those prototypes have shown up in shipping Microsoft products. Working with these teams gave me some idea of just how far the User Experience Design discipline had come in a relatively short time; Usability had gone from being a Craft to a fully-fledged Science, with its own sub-disciplines, laws and formalisms.
 
The Web has had a massive impact on our understanding of what makes a user interface usable and understandable. Because of their rapid mutation and evolution in response to users’ explicit and tacit feedback, web pages are to Usability Engineers what fruit flies are to Geneticists. User interface and experience design on the web has been highly experimental because of the breadth of its accessibility and the relative simplicity of web technologies. It has brought about a “democratization of [UI] design”. I am continually amazed at the brilliant UI that I see online.
 
A couple of years back I read a great book that further formalised the way I approach User Interface design. It is called “Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart”. It obviously is not a UI Design book; it is actually about the hidden knowledge in the Exabytes of digital data that we generate, which can be revealed by applying various statistical analysis techniques. There is a great chapter about the use of randomized testing to improve the usability of web sites. The study documented in the book suggests that there are in fact no Laws of Usability, and that there is no such thing as a user interface that will be usable and comprehensible to all users. It further suggests that the key to a usable interface is adaptability and customizability.
 
Ideally a user interface should adapt or be adaptable to each individual user. A user should not have to learn new visual and interaction metaphors to get a task done. “Fortunately”, the software industry has forced users to learn a lot of metaphors, e.g. the Desktop, the Hyperlink, the Tree View, etc., which can now be used relatively freely, though we should certainly not feel obliged to use them. If possible a user interface should simply adapt, without requiring that the user explicitly set a number of configuration options. At a minimum a user interface should give the user a simple mechanism for customizing it to suit their preferences or preferred interaction “style”.
 
An example of an excellent user interface is Microsoft’s Visual Studio 2010. For a product with so many features and capabilities, I find it incredibly easy to use and fully configure to my personal preferences. Obviously this is not a general purpose product, but for those who use it, i.e. software geeks, it is easy to understand and is highly usable (except for the occasional performance issue). I have used Eclipse in anger (pun entirely intended) over the course of the last decade and it offers a good Usability counterpoint to Visual Studio 2010. 
 
So one would imagine, given what we already know about designing easily understood and usable user interfaces and experiences, that all modern user interfaces would be highly usable. Sadly that is not the case; I still see product after product, solution after solution, and web site after web site designed so poorly that only a Visual Basic programmer from 1993 could love them! There really isn’t any excuse.

Friday, August 5, 2011

Replicator Wars | Consciousness as War Machine

On occasion I have ideas that have nothing to do with Software Development, but that I think still merit being written down. This morning while riding the train reading “The Selfish Gene” by Richard Dawkins I had just such an idea.
 
I was reading the final chapter of the original edition, which introduces the “Meme” meme. This got me thinking about the tension between genes and memes. This tension exists because an individual can be “infected” by a meme that entirely subverts gene-driven behaviour, e.g. celibacy. A meme can ostensibly override a gene or gene-complex that has been millions of years in the making. This raises the question: if memes are potentially such a significant threat to genes, why have genes not evolved one or more defenses against them?
 
And then I had an idea; perhaps Consciousness itself arose as a genetic evolutionary response to the infectious, self-replicating and potentially destructive nature of memes.
 
I would assume that a brain able to replicate memes arose by random mutation, or for some other evolutionary purpose, but as soon as memes were able to replicate from individual to individual with some degree of fidelity, the game was afoot. And possibly this happened before consciousness evolved. A meme is after all just a tool, and there are species besides Homo Sapiens that are tool makers and users. Once memes became a reality the possibility existed that memes could override gene-driven behaviour. And one would have to imagine that, through simple Darwinian evolutionary processes, those same genes would then start to evolve countermeasures to those memes, since the genes of an individual infected with a given “toxic” meme would not survive into subsequent generations. So perhaps instead of evolving specialised adaptations to specific memes, they evolved a handful of generalised ones. And perhaps Consciousness was one of those adaptations.
 
But surely consciousness actually increases our susceptibility to meme infection? I would assert not. The ability to pass ideas from one individual to another does not seem to require consciousness; memes do not seem to need self-awareness to replicate, as is demonstrated by other species that pass epigenetic “knowledge” from generation to generation.
 
I assert that Consciousness actually helps prevent meme infection because it gives an individual the ability to filter and analyse memes, and then, if necessary, immunize themselves against those and similar memes. It also gives the individual the ability to root out destructive memes with which they have already been infected, and similarly immunize themselves against future infections. This would mean that there are genes that are evolving to make us more self-aware, and generally smarter.
 
Perhaps the development of Consciousness was not the only response to the potentially destructive nature of memes; one would assume that genes would also evolve that hinder the ability to process and transfer memes. Perhaps there are also retrograde genes in us that are evolving to turn us back into wild beasts, incapable of self-reflection, poetry, art, science, philosophy and Sudoku.
 
So there may be two genetic responses to the Meme Threat in the human gene pool; one that is making us smarter and one that is turning us into soulless meat puppets. 
 
I think I finally understand the difference between Democrats and Republicans. I kid, the Republicans!

Tuesday, August 2, 2011

C++ : The Resurrection

I have always had a love-hate relationship with C++; it gives you almost total control over the hardware but requires that you be very mindful of that hardware. It has been a very long time since I wrote any C++ code in anger, and until yesterday I was convinced that I would never write another line of C++ in my life. Then I read an article in the most recent MSDN Magazine on Windows API development with the latest version of the C++ standard, whose working title has been “C++0x” and which will officially be known as “C++11”. I had been aware that there was a new revision of the standard in the works, but I had no idea how significant a step forward this latest revision is going to be. Lambda Functions, Type Inference and a “foreach” equivalent in C++? Who are you and what have you done with C++?!?

And there is a lot more to the new standard, which is now final and should be published some time this summer. I found this Channel9 interview with Herb Sutter, in which he gives a very good overview of the new features of C++11, the process that brought it into being, and what it means for developers targeting the Windows Platform.

I think I am going to have to give C++ another chance, though I don’t know what this will mean for the other languages that I have been using recently: F#, IronPython and Scala. I guess we will just have to wait and see.

Thursday, July 28, 2011

Good Software Design is Aesthetic

Ten Principles of Good Design Redux | Part 3

I usually get rather testy when software developers insist that Software Development and Architecture are more Art than Science. The testiness is usually exacerbated by the poor quality of the code that is generally written by these very same software developers. If you need to do a depth-first traversal of a binary tree it is extremely unlikely that you will need to invent a new algorithm for doing so. Somewhere in the Noosphere you will undoubtedly find an exhaustive academic study on optimal algorithms for tree-traversal. I have lost track of the times that I have had to scold developers for attempting to implement their own crypto or thread-synchronization primitives.
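To make the point concrete, here is a minimal F# sketch of the textbook solution (the type and function names are mine, purely for illustration); this is exactly the kind of thing to look up rather than reinvent:

type Tree<'a> =
    | Leaf
    | Node of Tree<'a> * 'a * Tree<'a>

// Depth-first, in-order traversal: left subtree, then the node's value,
// then the right subtree.
let rec inOrder tree =
    match tree with
    | Leaf -> []
    | Node (left, value, right) -> inOrder left @ [value] @ inOrder right

// inOrder (Node (Node (Leaf, 1, Leaf), 2, Node (Leaf, 3, Leaf)))
// evaluates to [1; 2; 3]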

Is traditional Meatspace Architecture an Art or a Science? It is unarguably a creative discipline, but it is fundamentally underpinned by the material sciences and psychology*. Buildings that do not respect the laws of physics don’t stand very long, regardless of how beautiful they are. Meatspace Architecture is a creative process that is governed by a set of predominantly invariant constraints. It is true that new materials are constantly being invented, which alter those constraints, but I don’t imagine gravity is going away any time soon (until we start living in space anyway). I believe that Software Architecture (in the small and large) is no different from its Meatspace namesake.

So now that I have pooh-poohed Software Development and Architecture as Art I am going to do some legerdemain, some prestidigitation, and apparently contradict myself. Great Software Architecture is Art! A great software design, either in the small or large, can, and must in my opinion, be a thing of beauty. As a Software Architect you must be able to show a design to a colleague and have them say “Wow, that is awesome!”; “awesome” being equivalent to “beautiful” in High-Geek.  

Is it not true that budding artists of all types spend years and years learning, and in some rare cases developing, formal techniques? Only when they have truly mastered these techniques do they go on to create truly magnificent works of art, channelling their raw creativity and talent through a set of deeply inculcated techniques. Yes, yes, I know there are some great works of art painted by cats, but they are the exceptions rather than the rule.

And there is great hidden value in creating aesthetically pleasing software designs. The human brain is genetically tuned to recognise certain levels of complexity and other naturally occurring patterns as either aesthetically pleasing or not depending on their utility. By ensuring that your designs are aesthetically pleasing you are tapping into a pattern recognition system 2.5 million years in the making. An aesthetically pleasing software design is going to be more understandable, more enjoyable to implement and maintain, and most definitely easier to sell to a client or your management.

Quod erat demonstrandum (I hope anyway).

* Historically psychology has been considered a pseudo-science but I think folks like Steven Pinker and his ilk are dragging it kicking and screaming into the realm of “pure” Science.

Wednesday, July 27, 2011

Good Software Design Makes a Product Useful

Ten Principles of Good Design Redux | Part 2

“Of course software makes a product useful! Software is useful by design!” I hear you say. But is that really always the case?

At the beginning of this year I received a Blackberry smartphone from my employer. It was not one of the current-generation touchscreen devices, and I can’t speak directly to how useful those devices are, but the device that I received was not useful in the least. The combination of the form factor, the trackball input device and the very poorly designed user interface rendered the device practically useless to me. Despite the fact that this was a paid-for smartphone, with both voice and data covered, it sat in my bag unused for months. I eventually just sent it back and decided to cover my own communication costs rather than be subjected to the Blackberry user experience. I never even bothered to find out the model number, but it looked something like this.

[Image: a Blackberry smartphone]

I do know people who swear by their RIM devices, but I think RIM’s recent poor performance speaks in part to their inability to design and ship useful devices; this is the company that shipped a great tablet platform hobbled by its dependency on a Blackberry smartphone to enable its PIM features. And the sad thing is, there was a time when they were thought to be the embodiment of “useful”; I recall getting a Blackberry pager when I first joined Microsoft Corp. in 2000. It looked like this.

[Image: a Blackberry 850 pager]

It became the most useful piece of technology I owned. So what happened between then and now? The form factor of the phone that I found unusable in 2011 was the logical evolution of the pager that I found indispensable in 2000.

 

This happened.

[Image: an iPhone]

And the iPhone reset the “useful” bar. Dieter Rams is a huge Apple fan, and so am I. I want to quote the description of Rams’ principle as it appears on Wikipedia because it so precisely describes Apple’s apparent approach to designing useful products.

A product is bought to be used. It has to satisfy certain criteria, not only functional, but also psychological and aesthetic. Good design emphasises the usefulness of a product whilst disregarding anything that could possibly detract from it.

When I use any feature of my iPhone I can visualize the team of designers and engineers brutally redesigning and refactoring that feature to make it as useful to the user as possible. Though RIM’s devices are functional, they were clearly not designed to be beautiful or to give their users a deeply satisfying visceral experience. And though there are things about the iPhone that bug me, for example the inability to “skin” the user interface, they do not detract from the overall experience, because it is just so bloody great!

As an aside, I recently saw Microsoft’s new mobile operating system, codenamed “Mango”, running in the wild. It is beautiful. That is not a word that I use to describe Microsoft products very often, but it is totally apropos in this case. Though the predominantly monochromatic tile user interface metaphor is very bold, it is offset by a number of subtle visual elements and the Metro typography, which makes the whole experience highly pleasing, usable and ultimately useful. There is definitely a Windows Phone in my future.

I think Rams’ “useful” principle is applicable to far more than just User Interface and User Experience Design; Software Architects should deeply consider the usefulness of the technology that they design and build, even if that software has no user interface to speak of. Software should ultimately always be designed with the actual living and breathing humans who are going to be affected by it, not just use it, in mind. This is the core principle of Gestalt Driven Development.

Good Software Design is Innovative

Ten Principles of Good Design Redux | Part 1

If I were to propose a software system design based on a 40-year-old model for distributed computing, i.e. a classic 2-tiered Client/Server Model, would that be a good design, even if I used current technologies, e.g. the .NET Framework 4 or the latest JDK?

The case can be made that innovation is motivated by a company’s financial and competitive umwelt. That is definitely true, but I would assert that this is an effect rather than a cause. I believe that humans are driven by an innate instinct to improve and optimize our tools to maximize our survivability as a species. That is not to say that all innovation always moves us forward; some innovation can be positively retrogressive in the context of the survivability of our species. On the whole though I do believe that innovation moves us, in the large, to a better place. And innovation in software technology is no exception.

So to answer my earlier question; there might be some very niche cases where a design based on the classic 2-tiered Client/Server Model might represent a good design, but in general I would assert that it would not. Having tested the model thoroughly over the last 40 years we have discovered its obvious, and even subtle, flaws and weaknesses and have innovated new and improved models to address those shortcomings. The case can also be made that the problems that we are trying to solve have themselves significantly evolved and mutated, and that the old models have limited to no utility in light of these transmogrifications.

One of the most glaring limitations of the classic 2-tiered Client/Server model is that software designed using this model does not age well; because of the typically high degree of coupling between the domain and presentation logic, it becomes more and more difficult to modify the application as it ages, and replacing the user interface technology becomes almost impossible. The model seems to have an innate “drag” that accelerates the accrual of technical debt over time, inevitably leading to the technical bankruptcy of the system or application.
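A minimal F# sketch of the difference (the names here are invented, purely for illustration): in the 2-tiered style the domain rule below would typically be inlined into a button handler, so replacing the UI means rewriting the rule; written as a pure function, it has no knowledge of any presentation technology at all.

// A hypothetical domain rule, written as a pure function.
type Order = { Customer: string; Quantity: int }

let isValidOrder order =
    order.Quantity > 0 && order.Customer <> ""

// The presentation layer, whatever it happens to be this decade, simply
// maps its widgets onto the domain type and delegates to the rule.
let onSaveClicked customerText quantityText =
    let order = { Customer = customerText; Quantity = int quantityText }
    if isValidOrder order then printfn "Saving %A" order
    else printfn "Invalid order"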

To be clear, I do not espouse innovation for innovation’s sake, or using emerging software technologies just because they are new. I do espouse keeping one lazy eye on the bleeding edge of innovation, and always considering how these emerging technologies might simplify or improve an application or system.

Ten Principles of Good Design Redux

A few weeks back I wrote a teaser about Dieter Rams’ “ten principles of good design” and mentioned that I believe these principles are very applicable to Software Architecture. I did not however elaborate on how these principles are applicable. I hope to address that with a number of posts over the next few weeks, each covering one principle. I will endeavour to not deviate too far from Mr. Rams’ original definitions, given their recursive elegance, but in some cases I will need to invoke Poetic Licence.

Wednesday, July 13, 2011

Now This is Visual Debugging!

Yesterday I installed a novel Visual Studio debugging tool from Microsoft Research and Brown University called Debugger Canvas. I started playing with it and was immediately hooked; now this is visual debugging! Debugger Canvas augments the default debugging experience in VS and gives you an interactive visual graph showing the call stack and code path. Each node in the graph contains the source code of the corresponding function. It even respects your font and layout preferences!

As you step through the code, the active line is highlighted as usual, and “Step Over” and “Step Into” also work as you would expect; if you step over a call, a new node is not created. Once a function has been stepped into it remains on the canvas even after the call returns, building up a visual representation of the code path. All the regular VS debugging windows and features work as expected, e.g. hovering over a variable will show its current value.

Another cool feature is the ability to take snapshots of the value of a variable and then optionally display them on the canvas as a visual history. You can also annotate the canvas with “sticky” notes. The canvas itself is a file, so you can save it for later use, or export it as an XPS document. I can imagine that test engineers will love this; it will allow them to reproduce a defect, capture the full code path, pertinent variable history and their annotations, and then attach the resulting document to the bug in their defect management system.

There are a bunch of other features, including a “trial feature” that allows you to edit the code directly on the canvas (though apparently that feature is still a little buggy). If during a debugging session you tire of Debugger Canvas, you can just open a source file and continue debugging using the default debugging capabilities of VS.

I hope Debugger Canvas makes it into the next release of VS. You can download Debugger Canvas from Microsoft's DevLabs site.

Tuesday, July 12, 2011

A Love Affair with F# : Embracing a Functional Style

I grew up drinking the Object Oriented Kool-Aid, which imbues the drinker with the idea that functional programming languages are arcana whose use and utility are limited to academia and programming language research.

Despite this I remained vaguely curious about functional programming (probably because I secretly suffer from “academic envy”), and every so often would do some informal research on cool functional programming language constructs; closures, continuations, monads, list comprehensions, higher-order functions and pattern matching being good examples. I was eager to use the aforementioned in anger, but I never had what I thought was the “killer app” for a predominantly functional language.

Note: I say “predominantly functional” because many modern functional programming languages are not purely functional. For example F# is a best-of-breed functional programming language, with object oriented features. In comparison C# is a best-of-breed object oriented language that has functional programming language features. F# and C# seem to be slowly converging.
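For the uninitiated, here is a small taste of two of the constructs mentioned above, in F# (the examples are mine and purely illustrative):

// Higher-order functions: List.map takes a function as its argument.
let squares = List.map (fun n -> n * n) [1 .. 5]   // [1; 4; 9; 16; 25]

// Pattern matching: branch on the shape and contents of a tuple.
let describe point =
    match point with
    | (0, 0) -> "the origin"
    | (x, 0) -> sprintf "on the x-axis at %d" x
    | (0, y) -> sprintf "on the y-axis at %d" y
    | (x, y) -> sprintf "at (%d, %d)" x y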

Another reason that I did not make a serious foray into functional programming was the lack of good tools. I tried my hand at Standard ML, but since I had been so spoiled (in a good way) by Visual Studio, I could not bring myself to go back to a generic text editor and the command line. I was also not happy about the prospect of having to learn yet another set of APIs.

In 2005 Microsoft Research released F# 1.0 into the wild. F# is in the ML family of languages, so the little knowledge I had acquired playing with SML was applicable, and F# is a native .NET language, so I was able to leverage all of my existing knowledge of the .NET Framework. Initially the Visual Studio integration was very limited, but it improved steadily with subsequent releases of F# and Visual Studio. When Visual Studio 2010 was released along with F# 2.0, F# finally became a bona fide first-class .NET language. So with the release of Visual Studio 2010 I officially ran out of excuses for not learning and using functional programming.

I had played with F# before the VS 2010 release but my code always landed up looking very C#-like, with ref and mutable keywords all over the place. I miserably failed to embrace a functional style, and therefore missed the real magic offered by functional programming.    

I recently wrote what I would consider my first “real” functional program using F#. I decided to port an InfoPath/SharePoint/Excel (VBA) application to .NET and provide a number of enhancements in the process. I also thought it was a good opportunity to use functional programming in anger, so I decided to write a couple of the modules in F# and embrace a functional style from the outset. This meant writing the code as a hierarchy of functions, avoiding side effects and mutable types unless necessitated by the use of existing non-functional .NET APIs, and making as much use as possible of F# types, e.g. Lists, Tuples and Records, and F# language constructs, e.g. Active Patterns.

One of the modules that I wrote was a deserializer for InfoPath XML forms, which uses Partial Active Patterns to parse the XML. I will describe the implementation of the deserializer in detail in a future post. 
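As a teaser (this is not the deserializer itself, just a minimal sketch of the construct, with invented element names): a Partial Active Pattern either matches its input and returns Some result, or returns None, which lets you dissect XML directly inside a match expression.

open System.Xml.Linq

// A parameterized partial active pattern: matches when the element has
// a child of the given name, yielding that child's text content.
let (|Field|_|) (name: string) (element: XElement) =
    match element.Element(XName.Get name) with
    | null -> None
    | child -> Some child.Value

let describeForm (form: XElement) =
    match form with
    | Field "employee" who -> sprintf "Form filed by %s" who
    | _ -> "Not a recognisable form"

// describeForm (XElement.Parse "<form><employee>Jane</employee></form>")
// evaluates to "Form filed by Jane"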

I am not sure when it was that I achieved Functional Programming Satori (perhaps it was around the thousandth line of F# code), but it was profound and left me with a lingering love for F# and functional programming in general. There is something about functional code that is just so bloody elegant! I excitedly showed my F# code to a colleague, not because I wanted him to review it for correctness, but because I wanted him to appreciate the aesthetics of it! And that aesthetic appeal did not come at the price of performance; the resulting code performed as well as an imperative C# implementation that I had written as a benchmark.

I highly recommend learning and using F# to all software engineers actively developing on the .NET platform. Developers who have mastered LINQ will find that many of the concepts they have learned translate directly to F#. You can find everything you need to know to get started at the Microsoft F# Developer Center.

For those developers who are developing in Java I recommend taking a look at Scala. It is a general purpose programming language that includes many functional programming elements, and targets the Java Virtual Machine. Scala is what Java might have become if it had not been hobbled by politics.

Be warned though, once you have learned functional programming you may never want to go back!

Thursday, July 7, 2011

Ten Principles of Good Design

A few years ago I learned about Dieter Rams’ ten principles of “good design” and was immediately struck by how applicable these principles are to Software Architecture. I recently watched an amazing documentary film by Gary Hustwit called “Objectified”, which is about “our complex relationship with manufactured objects and, by extension, the people who design them”.

This clip from the film, in which Dieter Rams talks about his design principles, and a recent discussion I was part of concerning whether one should optimize for making a product “perfect” or for meeting committed ship dates, reminded me of how rarely products, including software, are designed using these principles. This film should be required viewing for all Software Architects.

Note: Gary Hustwit made another excellent documentary film called “Helvetica”, which is a must-see for anyone doing user interface design.