Magnetic Sensing for Human-Computer Interaction

Nishita Deka, Richy Yun
-
October 2024

TL;DR:

  • Magnetic muscle sensing (MMG) is a promising modality for building a universally accessible gesture-based control system for immersive interfaces.
  • Sonera's S1 chip will enable MMG in form factors like smartwatches, smart glasses and earbuds.
  • We’ll share more about the technology behind the S1 and its near-term capabilities in the coming months.

Today we’re sharing a preprint titled ‘Generalizable gesture recognition using magnetomyography,’ an exploration into the use of magnetic muscle sensing to build a more universal, intuitive and advanced way of interacting with machines – through natural hand movement. This is a continuation of our efforts to exploit magnetic sensing for a key application we’re excited about at Sonera: gesture control for personal computing.

Using gestures to interact with immersive interfaces

A notable trend happening in personal computing is a shift to more immersive and touchless realities. In the last few weeks alone, Meta and Snap shared demos of their AR glasses, highlighting a future of computing beyond smartphones and laptops. A key ingredient needed to unlock the new experiences that can be built with these interfaces is a correspondingly immersive and seamless bridge between the physical and digital world.

A popular method being explored today is gesture recognition, or the use of natural hand movements to enable digital control. We’ve already seen this implemented with cameras or inertial sensors – approaches that work pretty well for translating basic gestures (say a tap or drag) into simple digital controls. Unfortunately, being limited to basic gestures doesn’t do justice to the immersive experiences promised by these new interfaces and doesn’t come close to what we can do with our hands in the physical world.

A method that’s been explored more heavily in recent years is the direct recording of muscle activity using electrical sensors, or surface electromyography (sEMG). The benefits of this approach are two-fold: 1) it offers a way to capture more subtle gesture information by directly recording muscle contractions and 2) it does away with the need for cameras altogether, so any concerns about lighting or visibility of the hands become irrelevant.

We’ve seen some incredible demos of gesture control using sEMG: typing without a physical keyboard, moving a cursor on a computer, controlling a prosthetic arm and even deciphering sign language. This proves that it’s possible to enable capabilities previously deemed infeasible and lays the groundwork for an entirely new way for humans to interact with machines.

But building a gesture-based control system is far from trivial. One of the biggest challenges is guaranteeing that it works well across a wide user base; if it’s not easy for everyone to use, it probably won’t be used much at all. Continued advancements in sensor technologies and machine learning algorithms are critical for making this happen.

The power of magnetic sensing for gesture recognition

This is where we’re excited about the role magnetic sensing can play in advancing gesture recognition for the next wave of human-computer interfaces.

A few months ago, we released a preprint showing that magnetic muscle sensing, or MMG, can measure muscle signals with higher fidelity than sEMG, which tends to be heavily impacted by the conductivity of human tissue. Magnetic sensing is also insensitive to changes in humidity and skin conditions, such as sweat or hair. Together, these factors point to MMG as a way to build a more generalizable gesture recognition system, with less variability from physiological differences between individuals and from changes in skin conductivity. So, we decided to test that hypothesis directly.

What we learned about MMG for gesture recognition

In this work, we developed wristbands with third-party magnetic sensors (as in our previous work, we operated under idealized conditions) and had subjects perform various gestures. We implemented a signal processing pipeline to classify the gestures and tested how well the classification could generalize between participants – i.e., can a model trained on some participants work on a new, unseen participant? Here are the key takeaways from our work:
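To make the pipeline concrete, here is a minimal sketch of the kind of processing involved. This is not the pipeline from the paper – the window length, hop size, channel count, and RMS features are all placeholder choices for illustration:

```python
import numpy as np

def window_signal(x, win, hop):
    """Slice a (channels, samples) recording into overlapping windows."""
    n = 1 + (x.shape[1] - win) // hop
    return np.stack([x[:, i * hop : i * hop + win] for i in range(n)])

def rms_features(windows):
    """Root-mean-square amplitude per channel for each window."""
    return np.sqrt(np.mean(windows ** 2, axis=-1))

# Synthetic stand-in for one recording: 8 channels, 2 s at 1 kHz.
rng = np.random.default_rng(0)
recording = rng.normal(size=(8, 2000))
feats = rms_features(window_signal(recording, win=200, hop=100))
print(feats.shape)  # (19, 8): one 8-dim feature vector per 200 ms window
```

Feature vectors like these would then be fed to a classifier trained to map each window onto one of the gesture labels.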

1. MMG can be used to classify gestures with performance rivaling cutting-edge techniques. We achieved classification accuracy of 95.4% across 9 gestures, comparable to state-of-the-art accuracy with sEMG, making MMG an equally viable approach for gesture recognition.

2. MMG signals contain information that is highly valuable for gesture classification. In our analysis, we found that signals at high frequencies (>100 Hz) provide significant information that improves classification accuracy. We also know from our previous work that MMG-based data contains cleaner high-frequency content than sEMG. That means MMG is particularly well suited to capturing the kind of data needed for a more generalizable gesture control system.
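One way to quantify how much signal lives above 100 Hz is to split a recording's power into frequency bands. The sketch below (numpy only; the sampling rate, cutoff, and test signal are illustrative, not from the paper) computes band power from the FFT:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Power of x in the [f_lo, f_hi) band, computed from the spectrum."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return spec[mask].sum()

fs = 1000  # Hz, illustrative sampling rate
t = np.arange(fs) / fs
# A 40 Hz "slow" component plus a weaker 150 Hz component
x = np.sin(2 * np.pi * 40 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
lo = band_power(x, fs, 0, 100)    # energy below 100 Hz
hi = band_power(x, fs, 100, 500)  # energy above 100 Hz
```

A comparison like `hi / (lo + hi)` across modalities is one simple way to see how much usable high-frequency content a sensor preserves.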

3. Wrist recordings had the best performance. We looked at data recorded from both the wrist and forearm and, similar to sEMG, found that data from the wrist resulted in significantly higher classification accuracy than data from the forearm. This makes MMG well-suited for enabling complex capabilities in wrist-worn devices, like smartwatches or fitness trackers.

4. MMG is generalizable. We were able to generalize both across sessions (different sessions for the same user) and across subjects (different users). That means MMG-powered gesture recognition can be enabled for everyone out of the box, with little to no calibration.
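Cross-subject generalization of this kind is typically measured with leave-one-subject-out evaluation: train on every subject except one, test on the held-out subject, and repeat. Here is a toy version using synthetic features and a nearest-centroid classifier – placeholders for illustration, not the paper's data or model:

```python
import numpy as np

def nearest_centroid(train_X, train_y, test_X):
    """Classify each test vector by its closest class centroid."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(1)
n_subj, n_per, n_feat = 5, 30, 8
# Synthetic features: two gesture classes with structure shared across subjects
X = np.concatenate([rng.normal(c, 1.0, size=(n_subj * n_per, n_feat))
                    for c in (0.0, 2.0)])
y = np.repeat([0, 1], n_subj * n_per)
subj = np.tile(np.repeat(np.arange(n_subj), n_per), 2)

accs = []
for held_out in range(n_subj):  # leave-one-subject-out loop
    train, test = subj != held_out, subj == held_out
    pred = nearest_centroid(X[train], y[train], X[test])
    accs.append((pred == y[test]).mean())
```

High held-out accuracy in this loop is the signal that a model trained on some users transfers to users it has never seen.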

5. Sensor technology sets upper limits on performance. For all these experiments, we used a non-ideal recording system with limited bandwidth and more than one saturated channel per session on average. We expect better performance with sensors that can be placed closer to the body, can be arranged in dense arrays for more channels in a smaller area, and offer larger bandwidth and dynamic range.

What’s next: The S1

The work here, coupled with our previous studies on MMG, strengthens our conviction that MMG can unlock the next wave of human-computer interfaces. The missing piece is a sensor that can enable MMG at scale. That brings us back to our work at Sonera, where we're building the S1 chip to do exactly that.

Based on the work shared today, it’s easy to envision the S1 in a wrist-wearable device for gesture control. But what’s even more exciting is that the S1 allows MMG to be leveraged in other form factors as well (like smart glasses or earbuds), which could lead to the development of novel control schemes based on facial muscle movement or sub-vocal speech. With the S1, an entirely new class of products for XR gesture control becomes possible. 

Next up, we’ll do a deep dive on the technology behind the S1 – what exactly is it that makes it possible for us to enable magnetic muscle sensing at scale? And how does this further our mission to make neural data ubiquitous? We’ll be sharing more on these topics in the coming months as we get closer to making our vision a reality.

--

Nishita Deka, CEO

Richy Yun, Lead Neuroscientist