Saturday, February 12, 2022

The Principles that I Live and Work by

In the process of assembling a new engineering team, I thought it might be useful for its members to hear the principles that I live and work by. As I created the list, I was mindful to write down only those things that I can and have demonstrated, and to avoid principles that I aspire to but have not been able to consistently demonstrate.

 

So here they are:

  • I take risks.
  • I think hard.
  • I am humble.
  • I am curious.
  • I am creative.
  • I am courageous.
  • I do not sweat the small stuff.
  • I take pride in the work that I do.
  • I celebrate the victories of others.
  • I live on the ground, not on the map.
  • I make other people better at their jobs.
  • I am honest, with myself and with others.
  • I challenge my own most strongly held opinions.
  • I am mindful of my own strengths and weaknesses.
  • I am continuously improving my tactics and craft.
  • I learn from the successes and failures of others.
  • I am self-reflective and act on that self-reflection.
  • I fail, and I learn from both my successes and failures.
  • I am transparent with my team, leadership, and partners.
  • I am authentic, and I give others the space to also be authentic.
  • I always look for ways that I and my team can do things better.
  • I consider how the products that I build make people’s lives better.
  • I take responsibility for the consequences of my words and actions.
  • I tell the people I work with how I am feeling when it might impact them.
  • I treat everyone with respect, even when I vehemently disagree with them.
  • I support my opinions with data, or a model that can be communicated to others.
  • I consider the consequences of my words and actions before I speak or take them.
  • I respect and adhere to the values and policies of the organizations of which I am a part.
  • I have cognitive empathy for everyone and build mental models of what it is like to be them.
  • I say when I feel confident that I or my team can or cannot accomplish a task we have been given.
  • I make space for everyone to have their voice heard, no matter what their preferred mechanism for communication.
  • I communicate the confidence that I have in my estimates and am prepared to support that confidence with data or a model.

 

I am also a human, and I have bad days when I do not embody all of these principles. And I have compassion for myself, knowing that I am as subject to the limitations of my humanity as the rest of my species. It takes work not to fall back on pathological instincts, biases, heuristics and conditioned behaviours. It takes work to be your best self. I do and will do that work. If I have one principle that subsumes all these others, it is that if I want to change the world, I first have to change myself. As above, so below.

Friday, June 19, 2020

Augmented Emotional Intelligence

I am a Maker.

 
But what does it mean to be a Maker, and how does being a Maker impact one's life?

 
My path to becoming a Maker involved learning electronics, simple mechanical engineering, machine learning, 3D printing, robotics, machining and milling, vacuum molding, parametric modeling and simulation, PCB design, sewing, laser cutting, carpentry, and also, unfortunately, first aid. But it was not becoming proficient in any one of these skills that caused me to think of myself as a Maker. Somewhere on the path of learning all of these skills I realized that I had started to look at the world in a different way; I had begun to see the world as a continuous series of opportunities to create solutions for problems and challenges in my own and other people's lives, and that the only thing preventing me from solving any given problem was my own imagination. I also realized that this wasn't constrained only to problems that had solutions in the domains of the aforementioned skills, but rather applied to all the problems and challenges I faced in all aspects of my life. It is this attitude of being willing to attempt to solve any problem, with all the skills that one can bring to bear, or with new skills that one may need to learn, and of seeing every challenge as an opportunity to create a solution, that makes one a Maker.


This Maker ethos has inspired me to make many gadgets, toys, tools, programs and processes; from the ultimate 3D-printed back-scratcher, to a machine-learning-based pan, tilt and pedestal face-tracking gimbal for a web camera. However, the project I am most proud of, and invested in, is an Augmented Emotional Intelligence wearable for children and adults with Autism Spectrum Disorder (ASD).


ASD has impacted the lives of several people whom I care dearly about, and this has motivated me to become very involved in Neurodiversity advocacy. The challenges that those people, and all neuro-atypical people, face daily have inspired me to apply my Making mindset to reducing the stress and anxiety of Autistic people. One of the challenges that many people "on the Spectrum" face is that they are not able to automatically and instinctively read the emotions of the people around them. There is a misconception that Autistic people have low empathy. Autistic people in fact generally have high emotional empathy, which means that they are very sensitive to the emotions of others. Unfortunately, they also tend to have low cognitive empathy, which means that they are not able to accurately identify emotions, nor construct accurate causal narratives for why someone might be feeling a specific emotion. This inability to correctly interpret the emotional states of others puts them at higher risk than neurotypical people, particularly when those emotions are frustration, anger and disgust.


I decided to attempt to build a wearable device that would be able to recognize the emotions of the people around the wearer, and give the wearer discreet feedback so that they could recognize and appropriately respond to those emotions. I coined the term "Augmented Emotional Intelligence" to describe the essential function of the device. I chose to use a haptic device to give the user feedback, and to employ Affective Haptics and Emotion Elicitation to indicate which emotions were being detected, and the intensity of those emotions. The device would detect emotions that might cause the wearer stress or anxiety, or potentially expose them to harm from others, primarily frustration, anger and disgust. Not only would the device give the wearer feedback, but it could also automatically call for help from a caregiver in the face of intense negative emotions, with a location and a summary of the detected emotions. The device could also include sensors to detect the wearer's emotions, though this was not part of the scope of my initial project.
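
To make that behaviour a little more concrete, here is a minimal sketch, in Python, of what the feedback policy might look like. Everything in it is hypothetical and for illustration only; the emotion names, haptic patterns, alert threshold and respond function are placeholders, not the device's actual firmware.

    from dataclasses import dataclass

    # Emotions in scope for the device, each with its own haptic pattern,
    # expressed as a sequence of (vibrate_ms, pause_ms) pulses. The patterns
    # and threshold below are illustrative placeholders.
    HAPTIC_PATTERNS = {
        "anger":       [(400, 200), (400, 200)],
        "frustration": [(200, 100), (200, 100), (200, 100)],
        "disgust":     [(600, 300)],
    }

    ALERT_THRESHOLD = 0.85  # intensity above which a caregiver is notified

    @dataclass
    class Detection:
        emotion: str      # e.g. "anger"
        intensity: float  # 0.0 to 1.0, as reported by the emotion model

    def respond(detection, location):
        """Decide what feedback, if any, to give the wearer."""
        pattern = HAPTIC_PATTERNS.get(detection.emotion)
        if pattern is None:
            return {"haptic": None, "alert": None}  # emotion not in scope

        # Scale pulse lengths by intensity so stronger emotions feel stronger.
        haptic = [(int(v * detection.intensity), p) for v, p in pattern]

        # Above the threshold, also notify a caregiver with location and summary.
        alert = None
        if detection.intensity >= ALERT_THRESHOLD:
            alert = {
                "to": "caregiver",
                "location": location,
                "summary": f"{detection.emotion} at {detection.intensity:.0%}",
            }
        return {"haptic": haptic, "alert": alert}

    if __name__ == "__main__":
        print(respond(Detection("anger", 0.9), "schoolyard, NE corner"))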

 
Machine learning technology has come a long way in a very short time, and Deep Neural Networks, as applied to Computer Vision, have given us the ability to recognize and classify faces in images and video in a computationally efficient way. One of the sub-domains this is being applied to is Human Emotion Detection and Recognition. Neural Networks that use Facial Coding features, which break the face into several groups of muscles, can recognize primary emotions relatively accurately, though they are not yet able to efficiently measure more subtle emotions or interpret micro-expressions. Voice and Speech Recognition, which have also hugely benefited from the invention of Deep Neural Networks, can also be used to recognize emotions, either on their own or combined with video data, but these significantly increase the overall computational cost of the models or systems.


My initial, and unfortunately naïve, goal was to use software-based machine learning on a microcomputer to detect micro-expressions captured from an attached camera. My initial attempt used a Raspberry Pi 3B, OpenMV, and Python. I very quickly discovered that the Pi was woefully inadequate computationally for running even the simplest ML models for primary emotion detection, let alone micro-expression detection, micro-expressions having durations of less than 500 ms. I resorted to using the embedded camera and microcomputer for no more than face detection, and then, once a face was detected, sending the image of the face to Microsoft’s Cognitive Services face/emotion detection service. Though the results, even from the cloud service, were not adequate for a real product, they were good enough to demonstrate what a real product might look like.
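
In rough outline, that division of labour looked like the sketch below: cheap face detection on the device with OpenCV, and a cropped face sent to the cloud for emotion scoring only once a face is found. This is a reconstruction for illustration rather than the original code; the endpoint, query parameter and response shape are assumptions from memory of the Face API of that era, and the region and key are placeholders.

    import cv2
    import requests

    # Assumed endpoint and parameters for an Azure Face-style emotion service;
    # the region, key, and response shape are placeholders, not the real values.
    ENDPOINT = "https://<region>.api.cognitive.microsoft.com/face/v1.0/detect"
    PARAMS = {"returnFaceAttributes": "emotion"}
    HEADERS = {
        "Ocp-Apim-Subscription-Key": "<your-key>",
        "Content-Type": "application/octet-stream",
    }

    # Haar cascades are light enough for a Pi 3B and ship with OpenCV.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    camera = cv2.VideoCapture(0)

    while True:
        ok, frame = camera.read()
        if not ok:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        for (x, y, w, h) in faces:
            # Only the cropped face leaves the device, to keep payloads small.
            ok, jpeg = cv2.imencode(".jpg", frame[y:y + h, x:x + w])
            if not ok:
                continue
            response = requests.post(ENDPOINT, params=PARAMS, headers=HEADERS,
                                     data=jpeg.tobytes(), timeout=5)
            for face in response.json():
                print(face.get("faceAttributes", {}).get("emotion"))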


In the evolution of this project I have developed versions of the device based on multiple versions of the Kendryte K210-based Sipeed MAIX, which includes a Neural Network Accelerator; multiple versions of NVIDIA’s Jetson platform, including the Jetson TX1, TX2 and Nano; and the Google Coral TPU. I have also tried several pre-trained emotion detection models, and models that I have trained myself using existing datasets. This project has also motivated me to collaborate with researchers at Microsoft Research who are working in similar areas, primarily in Accessibility. Unfortunately, the technology is not yet at the point that a device with the required specifications could be manufactured. Despite all the software advances that have been made in Machine Learning, and the hardware advances that have brought ML to embedded devices, the detection of micro-expressions on a System- or Module-on-a-Chip is still not possible, and most sophisticated emotion detection models will use 100% of the CPU, GPU and memory of a modern, high-end engineering workstation.


But the software and hardware continue to improve at an ever-accelerating rate. Models are now being trained on truly massive data sets, and new techniques are being discovered that dramatically improve detection, recognition and computational efficiency in general. The Internet of Things, Edge AI, robotics, and autonomous vehicle research and development are driving substantial hardware innovation in embedded AI.


At some point in the not-too-distant future, my vision of an Augmented Emotional Intelligence wearable that meets the performance, reliability, predictability and durability requirements of a device that protects vulnerable neurodiverse adults and children will be realized. Until that day I will continue to evolve my prototypes and apply my Maker mindset to the problem.

 
Even if this product is never brought to market, it will continue to give me a platform to talk about Neurodiversity and be an advocate and ally for people with ASD. And that is a cause worth pursuing in its own right.

Friday, May 31, 2019

On Software Architecture and Creativity

I recently became embroiled in a discussion about what exactly Software Architecture, Software Design and Software Engineering are, and if and how they are different. Despite how often these terms are used in the course of creating software, there seem to be as many definitions as there are people who use them. I have had reasonably durable definitions of these concepts for decades, so I thought I might discuss them in a post.

Firstly, I generally think that in the context of Software Development the terms Architecture and Design, as both verbs and nouns, are interchangeable, with the exception of User Experience Design and Interface Design, which I consider different disciplines and domains entirely.

A useful generalized definition of Architecture is the process of envisioning and planning the implementation of a coherent system or systems to meet the static and dynamic constraints imparted by the broadest possible contexts in which those systems will operate over their lifetimes. Not all systems have direct users per se, but when they do, Architecture should always prioritize the constraints imparted by their human users.

A foundational dichotomy in traditional Architecture seems to be the relationship between Form and Function, and which of these should be the prime mover. It would seem that the current zeitgeist has Function in the clear lead, and that Form should follow (though you might disagree after looking at the utterly magnificent buildings designed by Zaha Hadid or Frank Gehry).

My opinion is that Form is in fact just Function with a human face.      

In meatspace Architecture the human emotional, social and cultural contexts are fundamental to creating buildings that are optimized for their human occupants. Novelty, for its own sake, seems to be something that humans are neurologically hard-wired to receive a neurochemical reward for. It does not require a massive amount of imagination to see how, from an evolutionary perspective, desiring the new and different might be highly adaptive, particularly when balanced against a similar hard-wiring to need things to remain the same (though the two drives might not be embodied equally in the same individual).

Humans are also significantly emotionally impacted by certain forms. Whether this is culturally informed, or by Genetic or Evolutionary Aesthetics, the apprehension of structural, non-structural, and environmental elements of buildings can and does have a very significant psychological impact on the humans who live or work within or around them. For example, the unfortunate emotional impact that Le Corbusier’s architecture had on some of the people who had to live in his buildings is well documented, and one cannot help but be awed by Antoni Gaudí's Sagrada Familia.

I would assert that Form can be thought of as the sum of several Functional contexts related to Human psychology.

So how does this relate to Software Architecture? Where do Form, artistry and creativity fit into the design of software systems?

I think it is safe to say that all human endeavors involve some component of creativity. No matter how dry and formal a domain, humans are continually finding new and often “better” ways to do things. And Software Architecture is no exception. Great software architecture does most definitely require and benefit from creativity, though I would assert that unlike meatspace Architecture, creativity for the sake of novelty is a very bad idea in most cases (that I have seen anyway). Recall that I am not including User Interface and Experience in this; I am referring to the design of software systems. And that is also not to say that a computer system, described in diagrams, algorithms or code, cannot be a thing of stunning beauty. But that beauty is a function of simplicity rather than novelty, in my opinion.

Knowing when and where to be creative, and where to explicitly not be, is the mark of a great Software Architect. All too and painfully often I have observed Software Architects add novelty to a system that ultimately makes that system less stable, usable, maintainable, scalable, performant, future-proof and cost-effective. Software Architecture is a sub-discipline of Software Engineering after all. When faced with a problem and a set of constraints on the solution, an engineer’s first action must be to see if there exists a canonical solution that holds over the same constraints. If there is, then there is no discussion; the most efficient and cost-effective solution is to use what already exists. Only in the case that the problem and its constraints are truly novel is creativity required. And that creative process must be heavily informed by existing bodies of work that relate to solutions to similar problems.

I like to think that I am an artist, so I fully acknowledge the value of and need for creativity, but we need to endeavor to deploy it judiciously in the design of software systems.  We should recognize and reward Software Architects and Engineers for how much code lands up NOT being written!

Saturday, October 27, 2018

Fac, ergo sum

I make, therefore I am.

A solitary human out on the African Savanna is mostly some other animal’s lunch; we aren’t particularly strong or fast. But we are crafty, and when we get together in groups we become formidable. So it is not surprising that we are social animals, and that we receive significant neurochemical rewards for social behavior.

Dunbar’s Number, 150, has now been found just about everywhere we look for the size of human communities, be they in Meatspace or Cyberspace. I suspect that the minimum number of human individuals required to make a functional, self-sustaining, durable collective is, not surprisingly, Dunbar’s Number. I would go so far as to suggest that we should redefine One Human as being made up of 150 individuals; any fewer or more than this is not sustainable. If this hypothesis has any merit, we should find significant evidence for it in the neurological reward systems in the human brain, and in the number of individuals we are hardwired to keep track of. We most certainly find it in data associated with the size of human communities.

Humans are also tool makers. And probably the greatest tool they have made is teaching other humans how to make tools. Civilization could be thought of as nothing more than an evolutionary framework for propagating the knowledge of how to make specific tools through spacetime. Given the massive evolutionary advantage that this ability has given us, one would imagine that we have evolved specific neurological mechanisms that would incentivize us to make tools and to teach others how to do so. It is highly likely that Making (which inherently includes the didactic component), is a natural and powerful antidepressant, which gives Makers a durable sense of purpose, meaning and accomplishment.

Given how beneficial Making is to our species, this also introduces the possibility that the primary purpose of human clumping, other than staying off the menu, is to maximize the opportunity for making tools and directly passing that knowledge on to others, while minimizing the caloric cost (cognition is SUPER expensive after all) of keeping track of the other constituents of one’s Human.

Given their utility, are tools not the most durable of memes?

Regardless of the validity of the above hypothesis, nobody can argue that Making, and then teaching others how to Make, is one of the most consistent sources of satisfaction and meaning. Certainly I have found being a Maker, along with investing in making other people’s lives better, to be the best anti-Nihilism I have come across.

Monday, October 8, 2018

Scoop of Death Junior – Making a Sumobot

I had not planned on entering this year’s Microsoft Vancouver annual Sumobots contest, and was most definitely not expecting to win it. I entered in 2016 as part of a team, and again in 2017 with a robot I designed and built myself, aptly named “Scoop of Death” (SoD). I spent about a month working on SoD, most of which was spent modeling a new body for the Elegoo kit, and the scoop that gave the bot its name. All the modeling was done in Autodesk’s Fusion 360, which I was just learning to use at the time. I also spent some of that time writing code to make the bot capable of competing autonomously, and even included a Pixy camera for hardware-accelerated computer vision. On the day however, given how many of the entrants were going to be remote controlled, I decided to go with a simple RC solution via the Bluetooth component and accompanying iOS app supplied in the Elegoo kit. Though the scoop on my bot got me all the way to the finals, I was not able to vanquish the six-servo metal beast that I faced, and was easily pushed out of the ring. I was totally happy with second place; I am generally not competitive, and participating in the event had given me an excellent opportunity to up my Maker game.

And in the year since that competition I have continued to improve my Maker Fu. I have started, and in most cases completed, a handful of rather ambitious projects (for me anyway) this year. A couple of the most ambitious ones are still underway, which is why I had initially decided not to compete in this year’s Sumobot contest. However, a few weeks before the event, Stacy Mulcahy, who runs the Microsoft Vancouver Garage and the Sumobot event, asked if I would participate. If I had simply been able to re-enter SoD in the competition, with only minor but devastating upgrades, it would have been a no-brainer. Alas this year they changed the restrictions to a max size of 6”x6”x6” and no more than 6.6 lbs., and SoD was somewhat longer than 6”. I decided to participate after all and picked up one of the Pololu Zumo kits that the Garage was providing to entrants. I also decided to limit myself to only two days to design and assemble my bot (3D printing not included because, well, it takes FOREVER). Because of the time limit that I had set for myself I also planned to reuse my killer scoop design from SoD and borrow code very liberally from the examples that come with the Zumo kit.

The Zumo Shield uses a lot of GPIO pins by default for the integrated accelerometer, gyroscope, magnetometer, motor controller, buttons, buzzer, battery meter, and optional line follower. The kit we got came with a basic Arduino Uno, so almost every pin was already used for something. The first additional part I built for my bot was a hard-wired analog joystick. I used a GHI Joystick Module that the Garage donated to me, and modeled and printed a comfy housing for it.


Unfortunately the joystick pushed me beyond the pin capacity of the Uno, since it required two additional analog pins and one digital one (for an optional button). I also planned to include an ultrasonic rangefinder and LCD display, so I decided to replace the Uno with a Mega board that I had lying around, since it has a lot more of everything. The last addition was a small breadboard for the additional parts (though I am not afraid of soldering, I try to avoid it if I can, to maximize the reuse of parts without having to resort to de-soldering or snipping wires). Before testing any of the electronics I happily jumped right into Fusion 360 to model my bot.

In retrospect I wish I had respected the wisdom of RTFM, or at least tested the electronics before starting the modeling. I would have discovered very promptly that the Zumo Shield does not work with the Mega because of differences in how the GPIO pins are mapped to the capabilities of the microprocessor. So I naively took the scoop from SoD, scaled it down to size, added an integrated mounting plate for the Mega, the Zumo Shield (with its attached motors) and all the other bits and pieces I wanted to include.

I spent half a day modeling the part, printed it overnight, and then completed the assembly the next day. I sliced the part into four sections before printing to avoid having to print support. I have done a fair amount of 3D printing now and I have found that a part that is printed in sections and then glued with model glue makes for a very structurally sound finished product, and it is easier to fix mistakes because it only requires the reprint of a single slice. It also helps if you want to insert additional weights or structural support into the finished part. In this case I only had to reprint one slice and then I inserted some weights (ball bearings) into the bottom slice of the scoop, glued it all together, and sanded it to a lustrously smooth finish.


A cautionary aside is necessary at this point. I did not wear a respirator while sanding the part, which was rather irresponsible of me. PLA is probably slightly toxic, and you don’t really want to breathe it into your lungs, but model glue is nasty stuff and you most definitely do not want to breathe that into your lungs mixed in with the PLA dust. After a few hours of sanding I definitely felt it in my chest. Don’t be like me – always wear a respirator when you sand – period!

Despite the possibly negative impact on my health, the part turned out beautifully, and everything fit almost perfectly. There were a few post-print modifications required, but thankfully PLA can be reshaped within reason with the aid of a heat gun. I then assembled the entire bot, along with all the electronics, and uploaded a version of the Zumo RC example modified to work with the joystick. It was at this point that I was reminded of whose mother Assumption is. The bot went into what I can only describe as a robotic fit. I finally R’d the FM and discovered the incompatibility between the Zumo Shield and the Mega. I had just assumed that because most of the components on the Zumo use I²C to communicate with the microcontroller, everything should just work. It turned out most definitely not to be the case, and I now had to salvage the situation with no time to model or print a new body.


Thankfully the Uno and Mega do share a similar shape and a few mounting holes. It didn’t take long to retrofit the mounting plate to fit the Uno, by drilling a couple of holes in the plate and heat-fitting a couple of long standoff nuts into those holes. I have found this technique to be the best if you want to create 3D-printed PLA parts that can be disassembled later. Drill a hole that is a few mm smaller than the nut, heat the nut with a heat gun and then gently push it into the hole. Once it cools it should remain very firmly in place, and you can place metal screws in one or both ends and tighten them pretty tight without the nuts slipping or coming out.


The return to the Uno meant that I had to make a choice; the bot had to be either autonomous or remote controlled. Not both, since there were simply not enough GPIO pins to support both scenarios. In the name of expedience, I abandoned the code for my loosely-coupled, message-based, interrupt-driven finite state machine, and just took the original sketch I had created for the RC version of my bot. The result was a very zippy little remote-controlled bot, which I had originally named “The Sixer”, for all of the new restrictions, but I decided that “Scoop of Death Junior” was a much better name.

Since I had already consumed most of the two days I had rationed myself to, I did nothing more to my bot until the day of the contest. I weighed Jnr that morning and it did not even weigh a full pound, despite my inclusion of weights in the scoop! It needed to pack on a few more pounds. So on the morning of the tournament I modeled a weight box for the back of the bot and set it to print. My fallback plan in case the print failed (because obviously that NEVER happens!) was to just cable tie bags of bearing balls to the front and back of the bot. The print finished with time to spare and I was able to firmly attach it to the body using the heated standoff nuts technique. I then filled the box with bearing balls, glued the box closed with model glue (mistake!) and headed off to the event, again without testing the bot (you may see a trend emerging here).

While waiting for the weigh-in I finally decided to test the bot. Everything worked perfectly… except I had put too many steel balls in the weight box, and significantly shifted the center of gravity of the bot up and to the rear. Now every time I drove it forward it would pull an awesome wheelie. This would not do! My primary weapon was my scoop, and a scoop is only good if it is under your opponent, not way up in the air! So I frantically ran back to my desk, strung some ball bearings on a cable tie and literally tied it on the front of the bot with hookup wire. It didn’t completely stop the wheelieing, but it did make it a little less extreme. In the end this wheelieing turned out to be something of a secret weapon.


The bravely-autonomous bot I faced in the first round, though cute, suffered from technical difficulties, and I was able to easily dispatch it. My subsequent matches were a little more challenging but I was consistently able to maneuver around my opponents, some of whom were remote controlled and some autonomous, and summarily push them out of the ring. I suspect that my advantage in most of these bouts (other than a human operator) was the weight I had added, since most of my opponents were using the standard kit that had been provided. Though I lost a couple of matches, I won every round, and then found myself in the semi-finals facing my nemesis from 2017 and the then reigning champion. He had also had to bring a new bot this year to comply with the new limits.

It was at this point that I thought I should put fresh batteries in my bot. This time I did test it, and the new batteries had made the wheelieing much worse! The extra juice was giving the motors extra torque – the lightest touch on the joystick had the scoop at 45 degrees and the bot headed forward at nearly full throttle. Perhaps I could use this in some way to give me an edge in the last couple of rounds. I had nothing to lose; I was in it for the Making, not the winning. If I let my opponent come to me, and get up over the lip of my scoop, I should be able to simply give it full throttle and use the torque to lift him, which should make it easier to push him out.


My opponent’s bot in the last two rounds (it was a double elimination tournament) was armed with 4x12v DC motors, while mine was running on the two “stock” 5v motors. In a straight head-to-head shoving match I didn’t stand a chance. In the end I suspect it all came down to three things – low latency, simplicity and build quality.

My decision to use a simple hard-wired controller, wired directly to two analog pins, meant that my bot was incredibly responsive. My opponent had chosen to go with an Xbox 360 controller, connected over RF to a receiver plugged into a USB shield, communicating with the microcontroller using a serial protocol. And those very powerful 12v motors were also a little on the slow side. My bot was significantly more agile and was able to move out of the way whenever we got into a head-to-head situation, and I was then able to come in from the side before he could turn, get the lip of my scoop under his treads, and then use the wheelie to lift and drive him out of the ring.


The other advantage was that my bot was very robustly constructed, and there just wasn’t a lot that could go wrong with it (other than human error – I lost one match because I forgot to turn my bot on). All the structural parts of my bot were 3D printed and either glued together with model glue or bolted in place. My opponent kept having technical difficulties, from his controller not working to the treads coming off his bot. I suspect that if his bot had been just a little bit more robust, I would not have stood a chance against those four 12v motors. In the end I suspect I won because I had to keep it simple to get it done in two days.

In the days since the contest I have continued to improve on the design of Jnr. I have added an additional weight box in the front and have reduced the size of the one in the back. Though it probably will not compete again I plan to print the parts and put them on the bot. I also plan to put all the models and code up so that others can improve on my design.

I continue to be incredibly grateful to Microsoft for the Garage, which has enabled all of this, and has given me the opportunity to learn so much.



Thursday, November 10, 2016

Agile Secret Sauce

There seems to be a lot of chatter on the Internet recently about the evils of Agile, and Scrum specifically. Some of these posts make a few valid points, but they all seem to have been written by people who have been burned because they were doing Fragile (see You might be doing Fragile), or who simply want to get a few extra page views on their blogs by posting content that is designed to provoke an emotional response. I won’t get into the specific flaws I see in their arguments; rather, I will describe one single practice that falls under the Agile banner, one that can mean the difference between sustained success and failure on any project, regardless of whether the people involved claim to be practicing Agile or any other methodology for that matter.

Despite the mind-blowing progress that has been made in software development technologies, tools, processes and practices in the last half century, more than half of the software development projects undertaken by organizations of all sizes still land up in some or other failure state, often related to poor execution rather than a misconceived vision. Though complex software development remains a relatively difficult undertaking, this does not justify or excuse the high rate of failure. Surely after creating software for over half a century, we would have worked out how to do it in such a way that most attempts were successful, at least in execution? Why have we not learned from our collective mistakes?

I assert that the software industry has already stumbled upon a simple practice that is almost a silver bullet for successful software project outcomes. Over my career, I have been on teams that have used BDEUF (Big Design and Estimation Up-Front); proto-agile iterative methods; formal Agile methods, including Scrum, eXtreme Programming and Kanban; novel and ad-hoc processes; and some that have just practiced Seat-of-the-pants. And across all of the aforementioned, the one practice that seems to have an almost binary impact on successful execution is The Retrospective (though it is not always named such).

The Retrospective represents the ultimate confluence of the warm and fuzzy of human psychology and the hard and practical of the Scientific Method. It creates a simple, though formal, process through which a team can learn from its own collective experience and continuously experiment with new practices (with managed risk), and it gives every member of the team an equal opportunity to shape the process that they are following.

Effective human-centric process engineering is hard; human psychology is complex and volatile. A process that works perfectly well for one person or team may fail dismally when adopted by another, even if the formalisms of that process are held to with high fidelity. And given the ever-more-rapidly changing environment, a formal process that works perfectly today will almost certainly not be optimal tomorrow.

And most teams who attempt to adopt formal process frameworks either don’t implement all the practices and disciplines required to make those frameworks effective, and then let them mutate organically, or institute overly cumbersome change control processes in an attempt to minimize the cost and productivity impact of changes. Processes can quickly go from being assets to liabilities, reducing organizational productivity and agility rather than improving them.

What organizations and teams need is a continuous process improvement process as the meta-process for the organization; one which permits and encourages continuous optimization in the face of the ever-changing environment. Managing the continuous improvement of processes needs to be baked into the organizational culture, and it must be owned by everyone from the intern to the CEO; everyone needs to be able to suggest changes to the process, and those changes should be considered if the proponent can formalize a credible hypothesis of how the change will make a net improvement in the process. The organization or team also needs to be able to measure the effectiveness, or lack thereof, of all tactics that it adopts, or experiments it attempts.

The Scientific Method is inherently empirical; for any hypothesis, the following need to be defined:

  • One or more tests that will prove the hypothesis
  • One or more tests that will disprove the hypothesis
  • A description of the data that need to be collected over which the tests will be evaluated
  • A mechanism to collect those data
  • A description of how other, potentially unrelated, environmental factors might affect those data

There is a misconception that the Scientific Method is complex. It’s not; it can be simply reduced to cycles of Reflect, Execute, Measure, and Inflect. The Retrospective is the primary mechanism for making the Scientific Method the basis for a continuous process improvement process, while also providing the heartbeat of the process.

I will save a detailed description for how I run Retrospectives for a future post, but the rough structure is as follows: every member of the team, in no specific order, gets an opportunity to comment on practices that should be continued, practices that should be stopped or modified, and new practices that should be adopted. All comments, though particularly those related to abandoning and adopting a practice, must be supported by a description of the expected effect that the proposed practice will have on the key metrics that the team uses to measure their performance, and how those effects will be measured. If the hypothesis is consistent and coherent, then the practice is adopted for one or more iterations or milestones (depending on how long it will be before the expected impact should be measurably and unambiguously identifiable). The proponent is then accountable for making sure that the required data are collected and analyzed, and for providing a conclusion in the Retrospective that coincides with the end of the experiment. If the experiment is successful then the practice is adopted as part of the formal process, until someone makes a supportable case for why it should be modified or abandoned in a Retrospective.
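
To give a feel for the shape of such an experiment, here is a small sketch of the kind of record a team might keep for each proposal; the field names are illustrative, not part of any formal framework.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class RetroExperiment:
        practice: str             # what we will start, stop, or change
        hypothesis: str           # the expected effect on the team's key metrics
        metrics: List[str]        # how that effect will be measured
        duration_iterations: int  # how long before the impact is unambiguous
        owner: str                # who collects the data and reports back
        conclusion: str = ""      # filled in at the closing Retrospective
        adopted: bool = False     # becomes True only if the experiment succeeds

    experiment = RetroExperiment(
        practice="Limit work in progress to two items per person",
        hypothesis="Median cycle time drops without reducing throughput",
        metrics=["median cycle time", "stories completed per iteration"],
        duration_iterations=3,
        owner="the proponent of the change",
    )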

The person who is running the Retrospective also must ensure that argument, debate and defensiveness are kept to a minimum, that egos are managed appropriately, and that the Retrospective does not run over the allocated time. As is common with other Agile practices, it is best that the person running the Retrospective is not a Boss. The importance of creating an atmosphere where participants feel empowered to speak their minds, are confident that their opinions are valued, and that all hypotheses will be evaluated solely on the merits of said hypotheses (rather than the seniority of the source), cannot be overemphasized.

Even if a team is not following a frequentative process cadence, regular Retrospectives provide a vital heartbeat for the team and the process. The cost of adopting the Retrospective is low, and this simple practice will have a significant positive impact on the productivity and morale of any team, regardless of the process that they are using, or the organizational culture.

Tuesday, December 15, 2015

Windows Live Writer goes Open Source

My tool of choice for writing posts to this blog has always been Microsoft’s Windows Live Writer. Unfortunately it has gotten long in the tooth, and has not seen an update in many a year. Some time back Scott Hanselman announced that Microsoft was planning to make the code for Windows Live Writer available under an open source license. And that has finally come to pass; the code has now been made available under an MIT license, and a fork called Open Live Writer has been created.
 
Unfortunately I could not use it to write this post, because of changes to the Google authentication APIs, but, as with most open source projects, I expect someone will provide a fix for that in a couple of days.
 
Open Live Writer can be downloaded here.
 
Update (December 23rd, 2015): the Google authentication issue has been fixed and I can now edit this blog with Open Live Writer… which I just did. Gotta love open source.

Friday, November 27, 2015

On the Merits and Risks of Being a Frank

And by Frank I am not referring to a member of a Germanic tribe, or a person named Frank, though I can imagine that bearing that moniker has its own merits and risks; I am referring of course to an individual who readily speaks their mind, is brutally honest, is forthright in the extreme, doesn’t beat around the bush, calls a spade a spade, chooses polemic over other forms of discourse, and so on and so forth. Let us call this kind a Frank.

And in the name of said frankness, I consider myself an instance of the aforementioned kind. As to why I am this way, I cannot rightly say; perhaps it is genetic, perhaps I learned it from my dearly departed father, perhaps I am on the spectrum, or perhaps I have a strong desire to reveal the underlying truth in all things, primarily to satisfy my own curiosity about life, the universe, and everything.

My life as a Frank has taught me something important that might help other Franks; regardless of the strength of one’s belief, the depth of one’s righteousness, the eloquence and sophistication of one’s argument, and the indomitable passion with which one delivers that argument; if one does not effectively engage the audience, either in hearing and responding to the subtleties of their points and counter-points, or in at least getting them to hear the subtleties of one’s own, then all of that passion, energy, and commitment are for naught.   

The mechanism one uses to elucidate the truth is just as important as the truth itself. If one is solely concerned with one’s own righteousness then perhaps it doesn’t matter, but if one is actually concerned with revealing or discovering the truth then it is not enough that one can identify the weaknesses or inaccuracies in an argument; one has to strive to successfully communicate one’s argument or position in such a way that it has the highest likelihood of being given any consideration by the listener (who may initially strongly hold an opinion quite opposite to one’s own).

In general, our most self-destructive behaviors are occluded from us. If one doggedly seeks to elucidate the truth of a thing, but finds oneself driving one’s point no matter what the response from the listener or audience; or worse yet, one doesn’t particularly care what the listener’s response is, then one is probably not in it for any noble reason. One’s frankness in this case is probably motivated by some deep-seated sense of inadequacy, a need for attention, or worse.

Franks need to always keep in mind that most people will not initially take the passion, energy and commitment that a Frank demonstrates for interactively discovering the truth of a thing, as noble or good; they will just think that the Frank is an arrogant asshole. And despite the fact that the Frank’s intentions may indeed be noble, those people may still be correct in their assessment. And even those relatively self-aware Franks have bad days, when their forthrightness smothers their empathy and compassion, and becomes belligerence or just plain cruelty.

Wisdom, truth and fact need to find fertile ground in the minds of the listener, else they are no better than fallacies, lies, and superstitions. You as a Frank are entirely responsible for ensuring that it is not the demeanor of the messenger that causes the audience to raise stone walls around that ground.

The truth will set you free… and in most cases also scare the bejesus out of you.

Wednesday, November 25, 2015

Tactics will get you through times of no Strategy, better than Strategy will get you through times of no Tactics

Despite how often the words Strategy, Strategic, Tactics and Tactical are used in planning meetings, it would seem that many who use them don’t actually understand the difference between the terms, often using them interchangeably. The purpose of this post is actually to discuss the merits of Tactical Excellence, but before elaborating on that topic I think I should disambiguate the two aforementioned terms.

The best way to illustrate the meanings of these terms is with a military example, which is most apropos, since these terms both have martial origins.

A group of generals and their staff decide that during an upcoming offensive, taking and holding a particular hill, which is currently held by the enemy, will be pivotal to the success of the offensive. Since it is the highest ground for hundreds of miles in every direction, it gives whoever holds that ground the ability to visually monitor all activity in the area, of friends and foes alike. Additionally, it has the advantage of forcing the enemy to fight uphill, and is generally a more defensible position. During a major assault one would not want to leave an enemy at one’s back holding such ground. The generals order a company of paratroopers to drop in behind enemy lines and take this hill ahead of the main offensive. Taking and holding this high ground, and denying it to the adversary, is an example of a strategy.

A strategy is typically a coarse-grained plan to achieve a large-scale goal. In the example above, the strategy itself does not describe how the hill should be taken, just that it should be taken and then held. Though the generals may have more specific instructions for their field commanders, typically, given the fluidity of combat, they are not overly prescriptive and leave the details up to those field commanders. It is tactics that will be used by the field commanders and their teams to deal with any circumstances that arise while taking the hill.

Tactics are repeatable patterns or behaviors that can be used to address specific challenges that might arise in achieving the strategy. Some military examples of circumstances requiring tactical responses are, encountering a machine gun nest or a fortified position, crossing a stream or ravine (while under fire perhaps), dealing with a wounded soldier, dealing with a chemical weapons assault, surviving a firefight with a larger enemy force, or for the purposes of my example, dealing with a sniper.

During the attempt to take the hill, a stick (a small team of paratroopers) gets pinned down by a sniper, who has secreted himself in a grove of trees higher up the hill, making further progress up the hill all but impossible without the squad taking heavy casualties. The lieutenant commanding the stick does not have the time to strategize a response to this threat; and he doesn’t have to, because he and his team have trained in a number of tactics for just this situation.

Assuming that the squad has found suitable cover (always a good tactic) and released smoke to obscure their position and movements (another good tactic), the squad needs to locate the sniper’s position. Historically this was done by drawing the sniper’s fire by raising a helmet or similar decoy, and using muzzle flash, the sound of the shot and its delay, and reflections off the scope, to manually triangulate the position of the shooter. Though they may still need to draw fire, modern special forces (and perhaps even conventional forces) carry electronic devices that detect body heat, the heat of the bullet in flight, shockwaves, muzzle flash, sound delay, and a number of other measurable indicators to electronically locate the shooter, making finding a sniper much less tedious.

Once the shooter’s position has been identified the sniper can then be dealt with, by laying down enthusiastic machine gun or mortar fire on that position, deploying a weaponized mini-drone (such exotic ordnance must exist!), or calling in air or artillery support (though in our example that might not be such a good idea; an entire grove of trees on fire, or torn to large irregular shreds, might be as much an obstacle to forward progress as the sniper was).

The tactics described above are not explicitly part of the strategy, but the successful execution of those tactics is critical to the successful execution of the strategy. For tactics to be optimally effective, the team needs to be highly efficient at their execution, having practiced them over and over, until they are second nature. They also have to trust that their leadership and teammates will all do their parts in the execution of the tactic, since most tactics require more than a single person to execute.

Soldiers have a toolbox of offensive, defensive and supportive tactics that they take to war, or any other theater they operate in: natural disasters, police actions, peacekeeping, etc. It is the depth and breadth of their tactical toolbox, and the soldiers’ expertise in executing these tactics, that distinguishes adequate soldiers from elite ones. There is a reason elite operators spend so much time training. Every tactic in their repertoire has to be practiced, tested and perfected, and new techniques continually added to address new threats or circumstances.

The need for such a tactical repertoire is not limited to the military though; and now to the actual topic of this post. In the example above, which is more important to the ground team's ability to do their job - having a good strategy, or having a wide and deep tactical repertoire? I would assert the latter. For operational teams, of any discipline, becoming and being Tactically Excellent is probably the most important strategy they could adopt. It is so important a strategy that I would consider it The Strategy to Rule Them All. Perhaps it even deserves Meta-strategy status.

Though I was in the military a long time ago, I have spent most of my career in Software Development. There are many highly effective Software Development tactics that teams can bring to bear to maximize the likelihood of success. These include practices like code reviews, retrospectives, planning poker, daily stand-up meetings, unit testing, short iterations, pair programming, refactoring, etc. Developing strategies for executing various types of pivots is also vital, given the rates of change that modern software engineers need to be anti-fragile to. If an engineering team becomes expert at these tactics, then it really doesn’t matter what strategy they are executing on; they will be effective. The weakest part of a strategy should never be the tactics used in the attempt to achieve it.

Strategies in general are rather ephemeral; they, by necessity, have to change to deal with environmental or competitive changes, but tactics are more durable, and even timeless in some cases. As CEOs (“Generals of Commerce”) strive for strategic excellence, attempting to predict the coarse-grained direction of the markets, their competitors and their customers, operational teams should focus on attaining and maintaining tactical excellence. This may sometimes mean spending time foreseeing events that never come to pass, and time practicing techniques that are never needed, but these are not wasted efforts, since they improve the team’s foresight, improve the team’s ability to work together as a cohesive squad, and generally make the team more agile.

As I like to say, Tactics will get you through times of no Strategy, better than Strategy will get you through times of no Tactics.