The unnatural juxtaposition of plastic limbs and human flesh is a jarring sight at first glance. For many years, prosthetic design tried to avoid exactly this, but the emphasis has since shifted from the aesthetics of the model to its actual function. The key component of this shift is the 3D printer, which deposits thin layers of plastic to turn computer-rendered models into reality in just a few hours. Because these designs can be customized and shared, 3D printing has made huge strides in the world of prosthetics with respect to efficiency, accessibility, and affordability (Oppus et al. 2016). Yet there is still room for progress on the mechanical side of these models. This gap has prompted many studies that aim to enhance 3D printed prosthetics in some way, leading to the concept of a connection between the brain and these plastic components.

A study conducted by students at Ateneo de Manila University in the Philippines set out to address a key problem: current arm prosthetic designs are typically only able to perform a grip-and-release motion (Oppus et al. 2016). The students wanted not only to increase the number of hand motions the prosthetic could perform but also to implement two new methods of control: a brain-computer interface (BCI) and a voice recognition module (VRM). The brain-computer interface in this study was an electroencephalogram (EEG) headband that detected neural patterns; similar studies have used sEMG sensors (Lonsdale et al. 2020) and other forms of EEG sensors (Bright et al. 2016) as the BCI connected to the 3D model. Although these may sound like intimidating components, a BCI essentially works by recognizing brain activity, such as electrical impulses, and communicating that activity to the 3D prosthetic in a form it can understand. The voice recognition module, meanwhile, is a device that interprets spoken words and sends a specific signal corresponding to each word, much like Siri on an iPhone.
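To make this idea concrete, here is a minimal, hypothetical Python sketch of how a controller might treat the two input channels interchangeably: both the BCI and the VRM are reduced to a discrete token that maps into one shared gesture vocabulary. The token names, gesture labels, and function names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (illustrative only, not the authors' code) of a controller
# that treats BCI and VRM outputs as one shared command vocabulary.

# Hypothetical gesture vocabulary; the study's actual command set may differ.
GESTURES = {
    "open": "all fingers relaxed",
    "fist": "all fingers bent",
    "peace": "index and middle relaxed, others bent",
}

def interpret(channel: str, token: str) -> str:
    """Map a recognized token (a classified EEG pattern from the BCI, or a
    spoken word from the VRM) to a gesture the prosthetic understands."""
    if token not in GESTURES:
        raise ValueError(f"unrecognized {channel} token: {token!r}")
    return GESTURES[token]

# Either channel resolves to the same gesture description:
print(interpret("BCI", "fist"))   # -> all fingers bent
print(interpret("VRM", "peace"))  # -> index and middle relaxed, others bent
```

The design point this sketch captures is that the prosthetic itself never needs to know which input method produced a command, which is what lets the researchers bolt two control schemes onto one hand.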

So how did these researchers test the idea of connecting brain activity to artificial hand movements? As it turns out, the experiment was relatively simple. Its most important part was a controller that could interpret signals from both the MindWave brain-computer interface and a voice recognition module. Once the controller received a signal from either source, it carried out the recognized task until the signaling stopped and the loop closed (Oppus et al. 2016). Beyond these extra components, the actual 3D model of the hand had to be modified to accommodate servo motors that increased the number of possible hand positions. Communication with the prosthetic ran through a radio-frequency USB dongle that received the signals from the BCI, and each signal corresponded to a binary chart that set the hand posture: a one represented a relaxed finger, whereas a zero represented a bent finger. After assembling the prosthetic and connecting each component, the researchers physically tested their model. To get a variety of results, both trained and untrained subjects performed hand motions with the prosthetic, repeating each motion twenty-five times so that an accuracy could be established from the outcomes. Over 475 total attempts with the MindWave BCI, they achieved a success rate of about 90% (Oppus et al. 2016). Comparatively, another study that used a different BCI ran 300,000 complete attempts, or epochs, and found that the sensors had an “accuracy value suspended around 98%” (Lonsdale et al. 2020). These results are extremely promising for the future of these prosthetics because they show that brain activity can be interpreted correctly at high accuracy over time. The overwhelming conclusion was that there can be a connection between the mind and a 3D printed prosthetic, which was exactly what the researchers set out to demonstrate. However, there is still plenty of room to improve on other issues such as accessibility and efficiency of design.
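As a concrete illustration of that binary posture chart, the sketch below (again hypothetical Python, assuming five servo-driven fingers) maps a five-bit posture string to per-finger servo positions and tallies a success rate the way the trials describe. The servo angle values and the exact success count are assumptions, chosen only to be consistent with the reported setup and the ~90% accuracy over 475 attempts.

```python
# Hypothetical sketch of the binary posture encoding described above:
# one bit per finger, 1 = relaxed (extended), 0 = bent.
# Servo angles are illustrative assumptions, not values from the paper.

RELAXED_ANGLE = 0    # servo position (degrees) for an extended finger
BENT_ANGLE = 160     # servo position (degrees) for a curled finger

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def posture_to_angles(bits: str) -> dict:
    """Map a 5-bit posture string to per-finger servo angles."""
    if len(bits) != len(FINGERS) or set(bits) - {"0", "1"}:
        raise ValueError(f"expected 5 binary digits, got {bits!r}")
    return {
        finger: RELAXED_ANGLE if bit == "1" else BENT_ANGLE
        for finger, bit in zip(FINGERS, bits)
    }

# Rock, paper, scissors as posture strings:
print(posture_to_angles("00000"))  # rock: every finger bent
print(posture_to_angles("11111"))  # paper: every finger relaxed
print(posture_to_angles("01100"))  # scissors: index and middle relaxed

# Accuracy bookkeeping as in the trials: each motion repeated 25 times,
# success rate = successes / total attempts. The split below is an
# assumption consistent with the reported ~90% over 475 attempts.
successes, attempts = 428, 475
print(f"accuracy: {successes / attempts:.1%}")  # -> 90.1%
```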

The authors did not acknowledge any mistakes or weak spots in their research; however, there are still some key drawbacks worth noting. The most striking is that the model’s interface, or circuit board, is too large to fit inside the actual prosthetic. Furthermore, the MindWave module requires a headset, adding yet another component to the prosthetic. These extra pieces work against the mission of improving the accessibility and efficiency of 3D printed prosthetics. In their effort to prove the concept, the researchers likely set aside the logistics of the overall model, but it is important to note that improvements can be made here. The same study that performed 300,000 tests also produced a better model that encases all the hardware and allows the prosthetic to lift up to forty-five kilograms, proving that a “one-piece” model is possible (Lonsdale et al. 2020). Finally, the accuracy of the devices is greatly affected by whether a user is trained in using them, so the prosthetic would require a training program that further reduces its overall accessibility.

Flaws aside, what is interesting about this study is that it was conducted in response to a practical problem rather than to previous experiments. That problem is that 3D printed prosthetics cannot move in ways that closely resemble a real limb. The research therefore aimed to add hand motions, such as the gestures from rock, paper, scissors, to the repertoire of the already effective 3D models. While this was the stated main goal, it was just as important, and more forward-looking, that the researchers explored the connection between mind and synthetic body via a BCI or VRM. Myoelectric arm prosthetics that rely on surviving nerves in the patient’s arm already exist, but they are useless when those nerves are impaired (Bright et al. 2016). Models designed as in the students’ experiment can alleviate this issue, and at a much lower price.

Technology will play a huge role in the future, and this experiment underscores that point. Most people have likely dreamt of having “mind control,” and this work is part of the path to making it possible in real life. Even though the average person does not need a 3D printed prosthetic, the implications extend to other parts of our lives: 3D printing allows anyone to create something in a matter of hours, and a brain-computer interface could be integrated with devices that are already prevalent in society, such as a smartphone or television. Whatever form it takes, the possibilities are vast and could make life much more efficient.

This research also demonstrates the value of probing new products and ideas further. Instead of taking the advantages of 3D printed prosthetics at face value, the researchers analyzed these models and identified their shortcomings. They could then design an experiment aimed at correcting those weaknesses and provide genuinely useful results. These results, especially those involving the MindWave interface and the voice recognition module, can be applied broadly across the medical field, psychological studies, and new consumer technology. 3D printing itself has a wide variety of applications as well; for example, it has been used in the medical field to produce dosage forms for certain drugs (Prasad et al. 2016). It is now clearer that development should continue in the field of prosthetics, because it carries larger implications than those initially evident.

Overall, this study produced important results that point toward the viability of adding these methods of control and, hopefully, changing the industry altogether. Prosthetics have long been overdue for a makeover, and many experiments are helping drive that change. With the successes already achieved, it is only a matter of time before the focus shifts from proving the concept to making the resulting models as efficient and accessible as possible. It is exciting to imagine where prosthetics will be in the near future, and one can only hope that they come to offer the best possible alternative to a real limb.


References

Bright D, Nair A, Salvekar D, Bhisikar S. 2016. EEG-based brain controlled prosthetic arm. IEEE. p. 479–483.

Lonsdale D, Zhang L, Jiang R. 2020. 3D printed brain-controlled robot-arm prosthetic via embedded deep learning from sEMG sensors. arXiv:2005.01797.

Oppus CM, Prado JRR, Escobar JC, Mariñas JAG, Reyes RSJ. 2016. Brain-computer interface and voice-controlled 3D printed prosthetic hand. IEEE. p. 2689–2693.

Prasad LK, Smyth H. 2016. 3D printing technologies for drug delivery: a review. Drug Development & Industrial Pharmacy. 42(7):1019–1031. doi:10.3109/03639045.2015.

 
