
Personalized Control for Lower Limb Prosthesis Using Kolmogorov-Arnold Networks

Abstract

Objective: This paper investigates the potential of learnable activation functions in Kolmogorov-Arnold Networks (KANs) for personalized control of a lower-limb prosthesis. In addition, user-specific versus pooled training data is evaluated to improve machine learning (ML) and deep learning (DL) performance for turn intent prediction.

Method: Inertial measurement unit (IMU) data from the shank were collected from five individuals with lower-limb amputation performing turning tasks in a laboratory setting. The ability to classify an upcoming turn was evaluated for a multilayer perceptron (MLP), a Kolmogorov-Arnold network (KAN), a convolutional neural network (CNN), and a fractional Kolmogorov-Arnold network (FKAN). The comparison of MLP and KAN (for ML models) and of CNN and FKAN (for DL models) assessed the effectiveness of learnable activation functions. Models were trained separately on user-specific and pooled data to evaluate the impact of the training data on performance.

Results: Learnable activation functions in KAN and FKAN did not yield significant improvement over MLP and CNN, respectively. Training on user-specific data yielded superior results compared to pooled data for ML models (p < 0.05). In contrast, no significant difference was observed between user-specific and pooled training for DL models.

Significance: These findings suggest that learnable activation functions may offer distinct advantages on datasets involving more complex tasks and larger volumes. In addition, pooled training performed comparably to user-specific training for DL models, indicating that model training for prosthesis control can utilize data from multiple participants.
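The core idea behind the KAN models compared above is that each network edge carries a learnable univariate activation function rather than a fixed nonlinearity. The following is a minimal numpy sketch of one such layer; it uses Gaussian radial basis functions as a stand-in for the B-spline bases typically used in KAN implementations, and all shapes, names, and feature counts are illustrative, not taken from the paper.

```python
import numpy as np

def kan_edge(x, coeffs, centers, width=1.0):
    """Learnable univariate activation on a single edge: a linear
    combination of Gaussian radial basis functions. The coefficients
    `coeffs` are the trainable parameters of this edge's activation."""
    basis = np.exp(-((x[..., None] - centers) ** 2) / (2 * width ** 2))
    return basis @ coeffs  # (batch,)

def kan_layer(x, coeffs, centers):
    """One KAN-style layer: each output unit sums the per-edge
    learnable activations applied to every input feature.
    x: (batch, n_in); coeffs: (n_in, n_out, n_basis)."""
    n_in, n_out, _ = coeffs.shape
    out = np.zeros((x.shape[0], n_out))
    for i in range(n_in):
        for j in range(n_out):
            out[:, j] += kan_edge(x[:, i], coeffs[i, j], centers)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 6))          # e.g. a window of 6 IMU features
centers = np.linspace(-2.0, 2.0, 5)  # 5 basis centers shared by all edges
coeffs = rng.normal(size=(6, 2, 5))  # 6 inputs, 2 classes (turn / no turn)
print(kan_layer(x, coeffs, centers).shape)  # → (4, 2)
```

In an MLP, the nonlinearity is fixed and only the linear weights are trained; here the `coeffs` tensor makes the per-edge activation shapes themselves trainable, which is the property whose benefit the paper evaluates.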

@article{mohasel2025_2505.09366,
  title={Personalized Control for Lower Limb Prosthesis Using Kolmogorov-Arnold Networks},
  author={SeyedMojtaba Mohasel and Alireza Afzal Aghaei and Corey Pew},
  journal={arXiv preprint arXiv:2505.09366},
  year={2025}
}