I am a Maker.
But what does it mean to be a Maker, and how does being a Maker impact one's
life?
My path to becoming a Maker involved learning electronics, simple mechanical
engineering, machine learning, 3D printing, robotics, machining and milling,
vacuum molding, parametric modeling and simulation, PCB design, sewing,
laser cutting, carpentry, and, unfortunately, also first aid. But it was not
becoming proficient in any one of these skills that caused me to think of
myself as a Maker. Somewhere on the path of learning all of these skills I
realized that I had started to look at the world in a different way; I had
begun to see the world as a continuous series of opportunities to create
solutions for problems and challenges in my own and other people's lives,
and that the only limit to solving any given problem was my own imagination.
I also realized that this wasn't constrained only to
problems that had solutions in the domains of the aforementioned skills, but
rather, all the problems and challenges I faced in all aspects of my life.
It is this attitude of being willing to attempt to solve any problem, with
all the skills one can bring to bear or with new skills one may need to
learn, and of seeing every challenge as an opportunity to create a solution,
that makes one a Maker.
This Maker ethos has inspired me to make many gadgets, toys, tools, programs,
and processes, from the ultimate 3D-printed back-scratcher to a
machine-learning-based face-tracking gimbal that pans, tilts, and pedestals a
web camera. However, the project I am most proud of, and most invested in, is an
Augmented Emotional Intelligence wearable for children and adults with
Autism Spectrum Disorder (ASD).
ASD has impacted the lives of several people who I care dearly about,
and this has motivated me to become very involved in Neurodiversity
advocacy. The challenges that those people, and all neuro-atypical people,
face daily have inspired me to apply my Making mindset to reducing the
stress and anxiety of Autistic people. One of the challenges that many
people "on the Spectrum" face is that they are not able to automatically and
instinctively read the emotions of the people around them. There is a
misconception that Autistic people have low empathy. Autistic people in fact
generally have high emotional empathy, which means that they are very
sensitive to the emotions of others. Unfortunately, they also tend to have
low cognitive empathy, which means that they are not able to accurately
identify emotions, nor construct accurate causal narratives for why someone
might be feeling a specific emotion. This inability to correctly interpret
the emotional states of others puts them at higher risk than neurotypical
people, particularly when those emotions are frustration, anger, and disgust.
I decided to attempt to build a wearable device that would be able to
recognize the emotions of the people around the wearer and give the wearer
discreet feedback so that they could recognize and respond appropriately to
those emotions. I coined the term "Augmented Emotional Intelligence" to
describe the essential function of the device. I chose a haptic device for
that feedback, employing Affective Haptics and Emotion Elicitation to
indicate which emotions were being detected, and the intensity of those
emotions. The device would detect emotions that might cause the wearer
stress or anxiety, or potentially expose them to harm from others, primarily
frustration, anger, and disgust. Not only would the device give the wearer
feedback, but it could also automatically call a caregiver for help in the
face of intense negative emotions, sending a location and a summary of the
detected emotions. The device could also include sensors to detect the
wearer's emotions, though this was not part of the scope of my initial
project.
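To make the feedback concrete, here is a minimal sketch of how an affective-haptics layer might map detected emotions and intensities to distinct vibration patterns. The emotion labels, the pattern parameters, and the motor.pulse() driver call are hypothetical illustrations, not the interface of any actual prototype.

```python
# Minimal sketch of an affective-haptics mapping (hypothetical driver API).
# Assumes a vibration motor object exposing pulse(duration_ms, strength).

from dataclasses import dataclass
import time

@dataclass
class HapticPattern:
    pulses: int        # number of vibration pulses
    duration_ms: int   # length of each pulse
    gap_ms: int        # pause between pulses

# Each "risk" emotion gets a distinct, easily distinguished rhythm.
EMOTION_PATTERNS = {
    "anger":       HapticPattern(pulses=3, duration_ms=200, gap_ms=100),
    "disgust":     HapticPattern(pulses=2, duration_ms=300, gap_ms=200),
    "frustration": HapticPattern(pulses=4, duration_ms=100, gap_ms=100),
}

def signal_emotion(motor, emotion: str, intensity: float) -> None:
    """Play the pattern for `emotion`, scaling pulse strength by intensity (0.0-1.0)."""
    pattern = EMOTION_PATTERNS.get(emotion)
    if pattern is None:
        return  # emotions we don't signal are ignored
    for _ in range(pattern.pulses):
        motor.pulse(pattern.duration_ms, strength=intensity)
        time.sleep(pattern.gap_ms / 1000.0)
```

The point of the distinct rhythms is that the wearer can learn to distinguish them without looking at anything, keeping the feedback discreet.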
Machine learning technology has come a long way in a very short time, and
Deep Neural Networks applied to Computer Vision have given us the ability to
recognize and classify faces in images and video with reasonable
computational efficiency. One of the sub-domains this is being applied
to is Human Emotion Detection and Recognition. Neural Networks that use
Facial Coding features, which break the face into several groups of muscles,
can relatively accurately recognize primary emotions, though they are not
yet able to efficiently measure more subtle emotions or interpret
micro-expressions. Voice and speech recognition, which have also benefited
hugely from Deep Neural Networks, can likewise be used to recognize emotions,
either on their own or combined with video data, but doing so significantly
increases the overall computational cost of the models or systems.
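As an illustration of the facial-coding approach, the sketch below scores primary emotions from facial Action Unit (AU) activations. The AU combinations follow commonly cited FACS-style prototypes, and the upstream detector that would produce the activation values is assumed rather than shown.

```python
# Sketch: scoring primary emotions from facial Action Unit (AU) activations.
# AU combinations follow commonly cited FACS-style prototypes; the detector
# producing `au` values (0.0-1.0 per AU) is assumed, not shown here.

EMOTION_PROTOTYPES = {
    "happiness": ["AU6", "AU12"],                  # cheek raiser + lip corner puller
    "anger":     ["AU4", "AU5", "AU7", "AU23"],    # brow lowerer, lid/lip tighteners
    "disgust":   ["AU9", "AU15"],                  # nose wrinkler + lip corner depressor
    "surprise":  ["AU1", "AU2", "AU5", "AU26"],    # raised brows, raised lid, jaw drop
    "sadness":   ["AU1", "AU4", "AU15"],
}

def score_emotions(au):
    """Score each primary emotion as the mean activation of its characteristic AUs."""
    return {
        emotion: sum(au.get(unit, 0.0) for unit in units) / len(units)
        for emotion, units in EMOTION_PROTOTYPES.items()
    }

# Example: raised cheeks and pulled lip corners score highest for happiness.
print(score_emotions({"AU6": 0.8, "AU12": 0.9}))
```

A real model learns far subtler combinations than this hand-written table, which is exactly why the subtle emotions and micro-expressions remain out of reach for small devices.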
My initial, and unfortunately naïve, goal was to use software-based machine
learning on a microcomputer to detect micro-expressions captured from an
attached camera. My initial attempt used a Raspberry Pi 3B, OpenMV, and
Python. I very quickly discovered that the Pi was woefully computationally
inadequate for running even the simplest ML models for primary emotion
detection, let alone for detecting micro-expressions, which last less than
500 ms. I resorted to using the embedded camera and microcomputer for no
more than face detection and then, once a face was detected, sending the
image of the face to Microsoft’s Cognitive Services face/emotion detection
service. Though the results, even from the cloud service, were not adequate
for a real product, they were good enough to demonstrate what a real product
might look like.
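For context, the division of labour in that first prototype looked roughly like the sketch below: detect faces on the device, then hand each face crop to the cloud service for emotion scoring. This is a simplified reconstruction rather than the original code, and the endpoint URL, subscription key, and response fields are placeholders for whatever the Cognitive Services API expected at the time.

```python
# Simplified reconstruction of the Pi 3B prototype's split pipeline:
# detect faces locally, send each face crop to a cloud emotion service.
# Endpoint, key, and response shape are placeholders, not the exact API.

import cv2
import requests

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
EMOTION_ENDPOINT = "https://<region>.api.cognitive.microsoft.com/face/v1.0/detect"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<subscription-key>",
    "Content-Type": "application/octet-stream",
}

def detect_and_classify(frame):
    """Return a list of per-face emotion-score dicts for one camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        ok, jpeg = cv2.imencode(".jpg", frame[y:y + h, x:x + w])
        if not ok:
            continue
        resp = requests.post(
            EMOTION_ENDPOINT,
            headers=HEADERS,
            params={"returnFaceAttributes": "emotion"},
            data=jpeg.tobytes(),
            timeout=5,
        )
        resp.raise_for_status()
        for face in resp.json():
            results.append(face["faceAttributes"]["emotion"])
    return results

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(detect_and_classify(frame))
cap.release()
```

The round trip to the cloud is what made the results "good enough to demonstrate" but nowhere near the latency a real wearable would need.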
In the evolution of this project I have developed versions of the device
based on multiple versions of the Kendryte K210-based Sipeed MAIX, which
includes a Neural Network Accelerator; multiple versions of NVIDIA’s Jetson
platform, including the Jetson TX1, TX2, and Nano; and the
Google Coral TPU. I have also tried several pre-trained emotion detection
models, and models that I have trained myself using existing datasets. This
project has also motivated me to collaborate with researchers at Microsoft
Research who are working in similar areas, primarily in Accessibility.
Unfortunately, the technology is not yet at the point where a device with the
required specifications could be manufactured. Despite all the software
advances that have been made in Machine Learning, and the hardware advances
that have brought ML to embedded devices, the detection of micro-expressions
on a System- or Module-on-a-Chip is still not possible, and the most
sophisticated emotion detection models will saturate the CPU, GPU, and memory
of a modern, high-end engineering workstation.
But the software and hardware continue to improve at an ever-accelerating
rate. Models are now being trained on truly massive data sets, and new
techniques are being discovered that dramatically improve detection,
recognition and computational efficiency in general. The Internet of Things,
Edge AI, robotics, and autonomous vehicle research and development are
driving substantial hardware innovation in embedded AI.
At some point in the not-too-distant future, my vision of an Augmented
Emotional Intelligence wearable will be realized: one that meets the
performance, reliability, predictability, and durability requirements of a
device that protects vulnerable neurodiverse adults and children. Until that
day I will continue to evolve my prototypes and apply my Maker mindset to
the problem.
Even if this product is never brought to market, it will continue to give me
a platform to talk about Neurodiversity and be an advocate and ally for
people with ASD. And that is a cause worth pursuing in its own right.