The current landscape of advanced prosthetics is defined by the paradox of immense capability at a prohibitively high cost. We believe that the convergence of affordable edge computing and modern AI frameworks allows for a new paradigm of a prosthetic that is not only accessible but also intelligent, learning and adapting to become a true extension of the user.
The global prosthetics market is a $1.2 billion industry, yet it fails a significant portion of its users. There are an estimated 40 million amputees worldwide, including roughly 2 million people in the US living with limb loss.
The most advanced solutions from industry leaders like Ottobock and Touch Bionics can exceed $50,000, creating a massive barrier to access. Furthermore, these devices are typically calibrated once, lacking the ability to adapt to a user's changing physiology or usage patterns. Cheaper alternatives, like those from Open Bionics, often sacrifice advanced functionality.
Our approach is built on a carefully selected hardware and software stack designed for real-time performance, adaptability, and affordability.
The process begins with a specialized bio-signal sensor, such as an electromyography (EMG) sensor, chosen for its signal quality and adaptability. The raw analog signal is pre-processed directly by the sensor's hardware, undergoing amplification, rectification, and initial filtering. To manage signal noise, we use a multi-pronged approach that combines hardware-based techniques (such as physical shielding) with software-based filters (digital signal processing steps such as band-pass and notch filtering).
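As a rough illustration of the software side of that chain, the sketch below band-pass filters an EMG window, notches out mains interference, rectifies the result, and extracts an amplitude envelope using NumPy and SciPy. The sampling rate, cutoff frequencies, and filter orders are assumptions for illustration, not final design values.

```python
# Minimal sketch of the software-side EMG conditioning chain, assuming a
# 1 kHz sampling rate and NumPy/SciPy on the onboard processor. All cutoff
# frequencies and filter orders below are illustrative placeholders.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 1000.0  # assumed sampling rate in Hz

def condition_emg(raw: np.ndarray) -> np.ndarray:
    """Band-pass, notch-filter, rectify, and envelope one raw EMG window."""
    # 1. Band-pass 20-450 Hz: keep the usable surface-EMG band.
    b_bp, a_bp = butter(4, [20 / (FS / 2), 450 / (FS / 2)], btype="band")
    x = filtfilt(b_bp, a_bp, raw)

    # 2. Notch out mains interference (60 Hz here; 50 Hz in other regions).
    b_n, a_n = iirnotch(60.0, Q=30.0, fs=FS)
    x = filtfilt(b_n, a_n, x)

    # 3. Full-wave rectification.
    x = np.abs(x)

    # 4. Low-pass the rectified signal to extract the amplitude envelope.
    b_lp, a_lp = butter(2, 6 / (FS / 2), btype="low")
    return filtfilt(b_lp, a_lp, x)
```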
A compact, high-performance computing module acts as the onboard processor, providing the computational power to run AI models in real time. To interface with analog sensors, a dedicated Analog-to-Digital Converter (ADC) converts the sensor's analog output into digital samples and delivers them to the main processor over a standard communication protocol such as I2C.
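The sketch below shows one way the processor could poll such an ADC over I2C using the smbus2 library. An ADS1115-style 16-bit converter is assumed; the I2C address and register map shown are that part's common defaults and would need to be checked against the ADC actually selected.

```python
# Illustrative sketch of polling an I2C ADC from the onboard computer with
# smbus2. The address and registers assume an ADS1115-class converter;
# configuring the ADC (input channel, gain, data rate, conversion mode) is
# omitted here for brevity.
import time
from smbus2 import SMBus

ADC_ADDR = 0x48          # assumed default I2C address
REG_CONVERSION = 0x00    # conversion result register (assumed)

def read_emg_sample(bus: SMBus) -> int:
    """Read one signed 16-bit sample from the ADC's conversion register."""
    hi, lo = bus.read_i2c_block_data(ADC_ADDR, REG_CONVERSION, 2)
    raw = (hi << 8) | lo
    return raw - 0x10000 if raw & 0x8000 else raw  # two's-complement decode

if __name__ == "__main__":
    with SMBus(1) as bus:  # I2C bus 1 on most single-board computers
        while True:
            print(read_emg_sample(bus))
            time.sleep(0.001)  # ~1 kHz polling; a hardware timer is preferable
```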
Our most significant innovation is the continuous learning AI. We utilize a hybrid model that pairs two complementary stages: a spatial stage that excels at identifying spatial patterns in the EMG data to classify distinct gestures, and a temporal stage that processes the sequence of those patterns, allowing the system to understand gestures in the context of movement over time.
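One way this spatial-plus-temporal pairing could be realized is a 1-D convolutional front end feeding a recurrent layer, sketched below in PyTorch. The specific layer types, channel counts, and gesture vocabulary are assumptions for illustration, not the confirmed production architecture.

```python
# A possible realization of the spatial + temporal pairing: convolutions
# extract per-window patterns across EMG channels, and an LSTM models how
# those patterns evolve over time. All sizes are placeholders.
import torch
import torch.nn as nn

class HybridGestureNet(nn.Module):
    def __init__(self, n_channels: int = 8, n_gestures: int = 6):
        super().__init__()
        # Spatial stage: convolutions over the EMG channels within one window.
        self.spatial = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # one 64-dim feature vector per window
        )
        # Temporal stage: sequence of window features over time.
        self.temporal = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.classifier = nn.Linear(64, n_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, channels, samples_per_window)
        b, t, c, s = x.shape
        feats = self.spatial(x.view(b * t, c, s)).squeeze(-1)  # (b*t, 64)
        seq, _ = self.temporal(feats.view(b, t, 64))
        return self.classifier(seq[:, -1])  # logits for the most recent window
```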
Crucially, this model is not static. We've designed the following workflow for it to learn and personalize over time.
During the first few days, the user performs a series of gestures to provide baseline data, which is used to fine-tune the base model in a supervised learning phase.
The model then enters a continuous learning phase. It learns only from actions confirmed by the user, using a replay buffer of those confirmed samples to retrain periodically, for example overnight while the device recharges. This prevents model "drift" while allowing it to adapt to the user's strengthening muscles or changing patterns.
Updates only occur when the model's confidence is low and the user confirms the action was correct. Users will also have options to roll back to a previous version or initiate a full recalibration if needed.
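To make this workflow concrete, the sketch below shows one way the confidence-gated updates and replay-buffer retraining could fit together. The confidence threshold, buffer capacity, optimizer settings, and training schedule are illustrative assumptions, and model versioning and rollback are omitted for brevity.

```python
# Simplified sketch of the confidence-gated update loop: low-confidence
# predictions that the user confirms are stored in a bounded replay buffer,
# and a nightly job fine-tunes the model on batches drawn from that buffer.
import random
from collections import deque

import torch
import torch.nn.functional as F

CONFIDENCE_THRESHOLD = 0.80   # below this, the user is asked to confirm (assumed)
BUFFER_CAPACITY = 5000        # bounded so old sessions age out (assumed)

replay_buffer: deque = deque(maxlen=BUFFER_CAPACITY)

def handle_prediction(model, window: torch.Tensor, user_confirmed_label=None):
    """Run inference; queue low-confidence, user-confirmed samples for replay."""
    with torch.no_grad():
        probs = F.softmax(model(window.unsqueeze(0)), dim=-1).squeeze(0)
    confidence, predicted = probs.max(dim=-1)
    if confidence < CONFIDENCE_THRESHOLD and user_confirmed_label is not None:
        replay_buffer.append((window, user_confirmed_label))
    return int(predicted)

def nightly_retrain(model, batch_size: int = 32, epochs: int = 3):
    """Fine-tune on buffered samples, e.g. while the device recharges."""
    if len(replay_buffer) < batch_size:
        return  # not enough confirmed samples yet
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        batch = random.sample(list(replay_buffer), batch_size)
        windows = torch.stack([w for w, _ in batch])
        labels = torch.tensor([y for _, y in batch])
        optimizer.zero_grad()
        loss = F.cross_entropy(model(windows), labels)
        loss.backward()
        optimizer.step()
```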
We recognize that technology alone is not enough. Our go-to-market strategy is designed to build momentum and navigate the complex regulatory landscape.
We plan a multi-phase launch, beginning with a direct-to-consumer model to build an initial user base, followed by partnerships with prosthetists, and ultimately working toward the insurance reimbursement pathway.
We anticipate the device will be regulated as a Class II medical device by the FDA. Our strategy is to pursue the 510(k) pathway, demonstrating substantial equivalence to prosthetic devices that have already been cleared; depending on FDA feedback, clinical data may also be needed to support our safety and effectiveness claims.