Sunday, December 9, 2012

Trapped in the Knowledge Value Network

In an effort to drive innovative thinking in my engineering team, I have had them read "The Innovator's Dilemma" by Clayton M. Christensen. One of the key concepts in the book is how Value Networks prevent otherwise successful companies from initiating or responding to Disruptive Innovation.

Though this term is overloaded, Christensen defines a Value Network as:

"The collection of upstream suppliers, downstream channels to market, and ancillary providers that support a common business model within an industry. When would-be disruptors enter into existing value networks, they must adapt their business models to conform to the value network and therefore fail that disruption because they become co-opted."

Value Networks are inherently self-reinforcing and cause often-catastrophic organizational inertia. Counter-intuitively, the more successful an organization is (in the traditional sense anyway), the more valuable the network is, and the greater the inertial effect that network has on the organization’s ability to innovate. Being in a Value Network seems to amount to signing up for collective-blinker-wearing in the name of short- to mid-term success. Not only do the members of the network not want disruptive innovation, they also don’t seem to see it coming; and when they do, they are often powerless to do anything about it because of the need to maintain short- to mid-term revenue.

"The Innovators Dilemma" was first published in 1997, making it a relatively old book, and its position as the state of the art in Innovation Management has been usurped by "The Lean Startup" and its ilk, but most of the ideas contained within are still highly relevant. I am routinely amazed, and not in a good way, by how few professionals, in either explicit or tacit Innovation Management roles, have read this book, or are even aware of the principles it expounds, despite how accurate they have been shown to be. This book should be required reading for everyone involved in Innovation Management, which is just about everyone involved in commercial software development in my opinion. 

I have found that the notion of inertia-causing value networks is applicable to more than just organizations; it is also a very useful metaphor for planning one’s career as a software professional. As a software engineer one develops one’s own Knowledge Value Network simply by using languages, tools, platforms and technologies on a day-to-day basis.

Microsoft SharePoint is a very good case in point; a SharePoint Developer is not just a .NET Developer with some SharePoint knowledge. SharePoint has enough breadth and depth that it is a domain in its own right, and becoming expert in all its APIs, services, models, schemas, languages (CAML anyone?), and other technical concepts is at least one full-time job and maybe two. One could have spent the last decade just doing SharePoint development, making lots of money and getting ahead in the world of software development, and never have heard of Hadoop, Node.js, Scala, or even other Microsoft technologies like ASP.NET MVC.

So what happens when SharePoint's star starts to burn through the heavier elements? When does one become aware that one’s future success is no longer assured by being a SharePoint expert? Probably too late in one’s career to do anything about it, leaving one to spend one’s technical twilight years doing legacy SharePoint application support. Old software does not die; it simply becomes covered under different clauses in the support policy.

I am not making any assertions about the specific future of SharePoint; this is most definitely a product with long legs, but it is a truism that those legs will get shorter at some point in the not-too-distant future and then disappear entirely.

I have also seen this phenomenon time and time again in large-scale enterprise development: a team of talented young engineers is hired to develop, enhance and maintain a complex line-of-business application. After a few years, and dozens of releases, the software has become so complex that it is a knowledge domain in its own right. New developers on the team take six months to get up to speed, despite being experts in the tools and technologies with and on which the application is built. Knowledge of the product’s minutiae becomes the developers’ personal value proposition and source of job security. They live and breathe the technical intricacies of the application every day, to the exclusion of emergent and alternative technologies that are simply not part of the technology mix used by the application. The line-of-business application, its tools and its technologies become their personal Knowledge Value Network, and like organizational value networks it is closed and self-reinforcing. These same developers often struggle to find financially and intellectually rewarding work when they finally emerge from under that rock.

Developers who do not recognize this will find themselves technically irrelevant at a critical point in their careers. One hopes that they have decided to follow the management track before this point; otherwise they are going to find themselves grey-haired individual contributors with technical skills only relevant for supporting legacy applications. That is not to say that this will be a poor source of income, but no amount of money is worth the soul-death that becoming a professional manager means for many software engineers.

So how does one avoid this fate? Set aside time each week to research emergent and competitive technologies, products, platforms, tools, methodologies, languages, architectures, designs, metaphors, patterns, anti-patterns, philosophies and domains. And don't just read about them; if you can actually play with these technologies, do so, even if you can't leverage that work in your day job. Not only will it give you real job security and maintain your sanity, it will also make you a better software engineer.

A personal example was learning Dynamic and then Functional Programming. A few years ago I decided to learn Python and then F#, despite the fact that my day job at the time required neither. Learning these languages has made me a significantly better C# programmer, though I still can’t tell you exactly why a space suit is like a monad; or is that the other way round? No matter. I also find that I am now able to solve programming problems far more elegantly and simply than I would have before learning these languages.
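
To give a flavour of the shift I mean, here is a toy Python sketch (an illustration made up for this post, not code from any day job) of the same little problem solved imperatively and then declaratively; the second version is the kind of solution that learning these languages trains you to reach for:

```python
# A toy illustration (not production code): the same problem solved imperatively
# and then declaratively. Item names and prices are made up.

items = [
    {"name": "widget", "price": 9.99, "in_stock": True},
    {"name": "gadget", "price": 24.50, "in_stock": False},
    {"name": "gizmo", "price": 3.75, "in_stock": True},
]

# Imperative style: accumulate state in a mutable variable.
total = 0.0
for item in items:
    if item["in_stock"]:
        total += item["price"]

# Declarative style: describe *what* you want and let a generator expression do it.
total_declarative = sum(item["price"] for item in items if item["in_stock"])

assert abs(total - total_declarative) < 1e-9
print(round(total_declarative, 2))  # 13.74
```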

My personal practice is to set aside a few hours per week and let my curiosity drive me. My most recent endeavour has been to work through Daniel Shiffman’s excellent new book, “The Nature of Code”, which is about simulating real-world phenomena in code. The examples are all in Processing and I have had a lot of fun creating random animated 3D surfaces using Perlin Noise. And no, it doesn’t have anything to do with SharePoint; but that is the point.
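
The original examples are in Processing, but the core idea fits in a few lines; here is a minimal Python analogue (assuming the third-party noise package is installed) that samples a 2D Perlin-noise height field and nudges it over time, which is essentially what drives those animated surfaces:

```python
# A minimal Python analogue of a Perlin-noise height field (the original sketches
# are in Processing). Assumes the third-party "noise" package: pip install noise
from noise import pnoise2

COLS, ROWS = 20, 20   # grid resolution
SCALE = 0.1           # smaller values give a smoother surface

def height_field(t=0.0):
    """Return a ROWS x COLS grid of heights sampled from 2D Perlin noise.

    Offsetting the sample coordinates by t nudges the surface each frame,
    which is what makes the animated, flowing-terrain effect possible.
    """
    return [
        [pnoise2(x * SCALE + t, y * SCALE + t) for x in range(COLS)]
        for y in range(ROWS)
    ]

# Print the first few heights of a couple of frames; in Processing these values
# would drive the z-coordinates of the vertices of a rotating 3D mesh.
for frame in range(2):
    grid = height_field(t=frame * 0.05)
    print([round(h, 3) for h in grid[0][:5]])
```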

Stay curious. Disrupt thyself.

Thursday, October 4, 2012

Finally Processing Snowflakes

I finally managed to get my simple Processing.js snowflake simulator working. The bug that was causing it to fail was fixed in the 1.4.1 release. The performance is poor and it eats up a bunch of CPU cycles, but here it is…

Monday, October 1, 2012

Emergent Complexity Management

In a previous post I asserted that accurate estimation of even moderately-complex software projects is impossible. This stems from our inability to predict the probability and impact of emergent complexity, due to our cognitive biases and the current human-centric method of generating software. Our inability to accurately predict and estimate is evident in almost every domain; the Great Recession being a perfect and painful example. Despite the fact that every decade or so a Black Swan Event occurs in the financial markets, Black-Scholes is still being used to predict the future price of options, and the markets still seem to be in denial about our inability to make accurate long-term predictions (even when we use sophisticated mathematical models). The difference between the markets and your average medium- to high-complexity software project is that in software projects you are almost guaranteed to have one, and typically multiple, Black Swan Events!
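
For the uninitiated, here is a minimal Python sketch of the standard Black-Scholes call-price formula, with illustrative inputs rather than market data; the interesting part is the assumption baked into it, a single constant volatility, which is exactly the kind of thin-tailed worldview that Black Swan events demolish:

```python
# A minimal sketch of the standard Black-Scholes European call price.
# The inputs below are illustrative, not market data. Note the assumption doing
# all the work: a single constant volatility, i.e. lognormal, thin-tailed returns.
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# A textbook example: at-the-money call, one year out, 5% rate, 20% volatility.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))  # ~10.45
```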

Despite the current significant failure rate of software projects, Professional Project Managers still believe that they can mitigate the risks associated with emergent complexity by building in a “buffer” or “contingency”. The data show unequivocally that this is, at best, pure self-delusion, and at worst criminal negligence; no matter how large this buffer is made, there is no way to ascertain upfront whether or not it is big enough to cover overruns that may arise due to emergent complexity. And obviously making it too large will cause project failure for purely financial reasons.
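
To make that concrete, here is a hypothetical Monte Carlo sketch; the distributions and the 30% buffer are assumptions chosen purely for illustration, not data from real projects. In a thin-tailed world a generous buffer looks perfectly safe; in a fat-tailed world the very same buffer gets blown through most of the time:

```python
# A hypothetical Monte Carlo sketch of why a fixed contingency cannot contain
# fat-tailed overruns. The distributions and parameters are assumptions chosen
# purely for illustration, not data from real projects.
import random

random.seed(42)
TRIALS = 100_000
BUFFER = 0.30   # a "generous" 30% contingency on top of the estimate

def thin_tailed_overrun():
    # Overrun factor ~ Normal(1.0, 0.15): the world the buffer implicitly assumes.
    return max(0.0, random.gauss(1.0, 0.15))

def fat_tailed_overrun():
    # Overrun factor ~ Pareto(shape=1.5): mostly modest, occasionally enormous.
    return random.paretovariate(1.5)

def busted_rate(overrun):
    """Fraction of simulated projects whose overrun factor exceeds the buffer."""
    return sum(1 for _ in range(TRIALS) if overrun() > 1 + BUFFER) / TRIALS

print(f"thin-tailed world: buffer busted {busted_rate(thin_tailed_overrun):.1%} of the time")
print(f"fat-tailed world:  buffer busted {busted_rate(fat_tailed_overrun):.1%} of the time")
```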

The emergence of Agile Practices is clearly an attempt to address the shortcomings in software estimation specifically, and in our software development processes in general. Recursive refactoring of Epics into relatively low-complexity User Stories and Tasks, and using techniques like Planning Poker, do make vaguely accurate estimation possible, but the fact that many Agile Gurus suggest abandoning estimation in hours entirely (in favour of an abstraction like Story Points) suggests to me that this does not substantially improve the overall temporal predictability of software projects.
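
For anyone who has never sat in a Planning Poker session, here is a hypothetical sketch of a single round; the deck and the convergence rule are simplifications for illustration, not a prescription:

```python
# A hypothetical sketch of one Planning Poker round; the deck and the convergence
# rule are illustrative simplifications, not a prescription.
DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]  # modified-Fibonacci story-point cards

def poker_round(votes):
    """Return a consensus story-point value, or None if the spread means 'keep talking'."""
    votes = sorted(votes)
    low, high = votes[0], votes[-1]
    # If the highest and lowest cards are more than two deck positions apart,
    # the team discusses the outliers and re-votes rather than averaging away
    # a genuine disagreement about complexity.
    if DECK.index(high) - DECK.index(low) > 2:
        return None
    return votes[len(votes) // 2]  # middle card of the sorted votes

print(poker_round([3, 5, 5, 8]))    # 5 -> consensus reached
print(poker_round([2, 5, 13, 40]))  # None -> discuss and re-vote
```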

Selling Agile Practices to developers is like selling Gatorade to people who have been lost in the desert for days without a canteen. They get it instantly. The problem however is selling Agile to other “stakeholders”; whether they are internal Management, Sales, Product Management or Marketing teams, or a client for whom one is building software. This is where the real Agile battle is being fought.

The status quo is our own fault as software engineers; for nearly half a century now we have been telling our customers that we know how long it is going to take, and implying that we have high confidence in our estimates. One has to wonder why, after we have been shown to be so utterly untrustworthy in this regard, our customers and managers have continued to let us near computer keyboards.

So why do customers and managers keep buying the snake oil? Obviously they believe that they are limiting their exposure to the risks associated with emergent complexity (and sometimes incompetence, unfortunately) by passing that risk on to the development team or company responsible for the software development. Sure, if they make the penalty clauses punitive enough, and the developers agree to those clauses, it may partially mitigate the risks, but it does not address the fact that, in this day and age of rapidly-and-always-changing competitive landscapes, a missed software release may very well mean the demise of an organization. So instead of mitigating the risk, it is simply being hidden and made binary.

That said, and despite what the Gurus say, it is not practical to simply leave software project costs and schedules open-ended. We need a way to roughly size software projects, but we also need a way to manage emergent complexity and minimize its negative impact on the schedule and, ultimately, on the value of the software to the user.

I am obviously one of a multitude of software professionals grappling with this, and I have yet to find a solution that is totally satisfying (and sellable). I suspect that the answer includes motivating stakeholders to abandon the illusion of certainty for real transparency and a daily opportunity to inflect the engineering process; and a more scientific approach to product management along the lines of The Lean Startup.

Monday, August 13, 2012

Bret Victor - Inventing on Principle

Every time I watch my two-year-old son driving his iPad I marvel at the elegance, simplicity and intuitiveness of the iOS user interface, and I wonder about those mostly-unknown geniuses who designed it. This weekend I watched a presentation by Bret Victor, who is one of those geniuses. This is an excellent talk on many levels.

 

Bret Victor - Inventing on Principle from CUSEC on Vimeo.

 

And when you are done take a look at Bret’s site for other amazing stuff. Just don’t use Internet Explorer; I suspect Bret does not like IE for some reason.

Tuesday, July 24, 2012

Rule 34

A few years ago I had the opportunity to go and hear one of my literary heroes, William Gibson, give a talk. After the talk I asked him which author he thought was his literary heir apparent. He said Charles Stross without hesitation. The next day I picked up a copy of Halting State, and thus began my love affair with the writings of Mr. Stross.

I recently finished Rule 34, which was published in 2011 and is the second book in the Halting State series. I loved this book. Not only is Stross a brilliant storyteller, but he also has his finger on the pulse of the current bleeding edge of cultural and technological evolution, and an amazing vision of where that evolution is going to take our species in the near and far future.

I don’t plan to give away the plot, but there are a couple of themes in this book that I will hint at because I have spent a lot of time cogitating about them over the last year.

 

The Rate of Technology Adoption by Law Enforcement

In Rule 34 Stross imagines a near-future where the criminal justice system is driven by sophisticated machine learning and augmented reality software systems. Stross’ technical vision cannot be faulted; all of the applications of technology that he describes are not only entirely feasible, but are mostly already commercially available today in some form or another. However, having worked in the Justice and Public safety domain for the last year, I have to question the rate of cultural evolution in law enforcement that this vision implies.

Despite what we see on television, in shows like CSI, in reality law enforcement agencies are glacially slow to adopt even relatively new technologies. In the United States and Canada today, other than the large federal and state/provincial agencies, most agencies and local police departments do not have fully integrated systems, let alone systems that leverage emerging technologies. The US Federal Government has a number of initiatives to address this, including the GRA and NIEM standards, but adoption of these has been very slow indeed.

In Rule 34 Stross hints at a cataclysmic cyber-attack that motivates a significant acceleration in technology adoption by law enforcement agencies across the planet. The 9/11 attacks failed to motivate this acceleration, so I am not convinced that a cyber-attack would, even if it went so far as to take out national power, telecommunication, and transportation infrastructure.

The current Holy Grail of law enforcement is Predictive Policing (think Minority Report sans the half-naked psychics in the wading pool). The data storage and real-time analysis technologies needed to implement predictive policing solutions are available today in off-the-shelf products, e.g. Hadoop, Mahout, MarkLogic Server, and even in products that agencies are probably already using, like Microsoft SQL Server. Despite this, most agencies are only now beginning to look into how they might integrate their existing systems, automate currently paper-based processes, analyse their data using 40-year-old OLAP technologies, and migrate existing 25-year-old mainframe applications. I imagine that it is going to be a while before we see predictive, machine-learning-based systems going into production in these agencies, let alone systems that leverage augmented reality user interfaces.

 

Free Will and Criminal Justice

One of the characters in Rule 34 asks a very interesting question: if humans do not have free will, then how relevant or useful is a system of justice based on increasingly complex laws and statutes, and on the enforcement of those laws and statutes primarily through the threat of punishment? Recent research in cognitive neuroscience, using functional brain imaging techniques, has revealed that we start acting on a decision, i.e. muscle-actuating nerve impulses are sent, before we consciously become aware that we have made that decision. This implies that most, if not all, of the decision-making process happens in the unconscious mind, and that once the decision has been made, the conscious mind is informed of it as a courtesy.

This seems to imply that Martin Luther was right[ish] all along; man does not have free will after all. If this is the case, then how can we expect that a threat of punishment aimed at the conscious mind will motivate obedience? And more importantly, how can we justify punishing those who transgress? Are they not merely victims of their own pathology? I don’t claim to have answers to these questions, but I do know that the findings of this research bring into question many of our most deeply held assumptions about our own nature, and the structures that we have put in place to govern that nature at scale.

Update (August 26th 2012): If you are at all interested in this latter topic, I highly recommend that you read “Free Will” by Sam Harris. It is a thorough analysis of the ongoing demise of our illusions of Free Will.

There is a lot more to Charles Stross’ Rule 34 than these two themes; it is a feast for the [conscious] mind, so I highly recommend that you pick up or download a copy and read it.

Wednesday, July 4, 2012

Pure Genius!

Here is a bit of pure genius that I have to share:

http://thecodelesscode.com/contents

This should be mandatory reading for all software developers and architects.

Enjoy!

Friday, January 20, 2012

The Emperor Will Never Have Clothes!

Half a decade ago I did some fairly exhaustive research into software time and cost estimation techniques for Microsoft Services. I evaluated the effectiveness of both formal and informal techniques used within Microsoft, across the software development industry, and even in other industries. I looked at everything from Estimation by Analogy to COCOMO II.

After months of research I discovered that the most commonly used estimation techniques were Estimation by Analogy and Wideband Delphi, often done in combination. However, most of the teams using these techniques were doing so informally, and had never heard of either of these terms.

I also discovered that there were no software estimation techniques that consistently produced accurate estimates, other than for small projects; not even sophisticated parametric techniques like COCOMO II.
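
For readers who have never seen a parametric model, here is a sketch of the original basic COCOMO effort equation, the much simpler ancestor of COCOMO II; the coefficients are the published values for "organic" projects, and the point is only to show the flavour of the approach:

```python
# A sketch of the *basic* COCOMO effort equation, the much simpler ancestor of
# COCOMO II, shown only to illustrate the flavour of parametric estimation.
# Coefficients are the published values for "organic" (small, straightforward) projects.

A, B = 2.4, 1.05   # effort   = A * KLOC^B     (person-months)
C, D = 2.5, 0.38   # schedule = C * effort^D   (calendar months)

def basic_cocomo(kloc):
    """Return (effort in person-months, schedule in months) for a project of kloc KLOC."""
    effort = A * kloc ** B
    schedule = C * effort ** D
    return effort, schedule

for kloc in (10, 50, 200):
    effort, schedule = basic_cocomo(kloc)
    print(f"{kloc:>4} KLOC -> {effort:7.1f} person-months over {schedule:5.1f} months")
```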

At the time I was doing the research the failure rate for IT projects was around 70%, and many of those failures were due to catastrophic budget and schedule overruns. It quickly became evident to me that the way the industry had historically thought about estimating software projects was deeply broken; The Emperor had no clothes! 

My final recommendation to Microsoft Services was that they adopt Estimation by Analogy and Wideband Delphi, and formally train their staff on the use and limitations of these techniques. My other recommendation was that they also invest in maturing their risk management processes, given the historical inaccuracy of software estimation in general. It was on this latter area that I focused until I left Microsoft.

Our apparent inability to do accurate estimation bugged me for years after I completed this research. This itch that I could not seem to scratch motivated me to do some reading about prediction in general, and in particular Quantitative Analysis and its use of stochastic models; one would imagine that if any industry had worked out how to predict the future it would be the Financial Industry, given their huge monetary incentive to do so. Obviously recent events have shown that not to be the case, but this was before all that nastiness.

And then I read “The Black Swan: The Impact of the Highly Improbable” by Nassim Nicholas Taleb, and it all made sense. It is not that we have yet to find an accurate technique to do software estimation; it is that accurate estimation of large, complex software development projects is simply NOT POSSIBLE! (given our current understanding of the laws of physics anyway). Not only does the Emperor not have clothes, but he will remain naked for the foreseeable future.

I suspect that the move to Agile software development practices is a natural response to our inability to do accurate estimation, but I am continually amazed at how many people in the industry, and how many of its customers, are still in denial about what should now be a self-evident fact. I still encounter many projects where development teams are held to early estimates, which are typically produced by non-technical project managers. And this is particularly true on fixed-cost/budget projects.

Surely it is time for the entire industry and its customers to acknowledge that this whole approach is simply broken? The illusion is that fixed-cost/budget projects shift ownership of the risk from the customer to the organization that is developing the software; but given the percentage of IT projects that still fail today, this is obviously just that: an illusion, and a pernicious one at that.

We need to start from the premise that accurate software estimation is currently, and probably forever, impossible, and go from there. Agile practices have made a good start but we obviously need to do more. And most importantly we need to educate our customers!

Here is an interview with Nassim Nicholas Taleb wherein he talks specifically about our inability to estimate IT projects: