arXiv:2310.00206
An Investigation of Multi-feature Extraction and Super-resolution with Fast Microphone Arrays

30 September 2023
Eric T. Chang
Runsheng Wang
Peter Ballentine
Jingxi Xu
Trey Smith
Brian Coltin
Ioannis Kymissis
Matei Ciocarlie
Abstract

In this work, we use MEMS microphones as vibration sensors to simultaneously classify texture and estimate contact position and velocity. Vibration sensors are an important facet of both human and robotic tactile sensing, providing fast detection of contact and onset of slip. Microphones are an attractive option for implementing vibration sensing as they offer a fast response, can be sampled quickly, are affordable, and occupy a very small footprint. Our prototype sensor uses only a sparse array (8-9 mm spacing) of distributed MEMS microphones (< $1, 3.76 x 2.95 x 1.10 mm) embedded under an elastomer. We use transformer-based architectures for data analysis, taking advantage of the microphones' high sampling rate to run our models on time-series data as opposed to individual snapshots. This approach allows us to obtain 77.3% average accuracy on 4-class texture classification (84.2% when excluding the slowest drag velocity), 1.8 mm mean error on contact localization, and 5.6 mm/s mean error on contact velocity. We show that the learned texture and localization models are robust to varying velocity and generalize to unseen velocities. We also report that our sensor provides fast contact detection, an important advantage of fast transducers. This investigation illustrates the capabilities one can achieve with a MEMS microphone array alone, leaving valuable sensor real estate available for integration with complementary tactile sensing modalities.
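The core idea in the abstract — attention over raw multi-channel time-series windows (rather than single snapshots), feeding a texture classifier and contact position/velocity regressors — can be sketched in miniature. Everything below (channel count, window length, single-head attention with random weights, the exact output heads) is an illustrative assumption for shape-checking the data flow, not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 5 microphone channels, 256 time samples per window,
# a 32-dimensional model. The paper does not specify these values.
N_CHANNELS, T, D_MODEL = 5, 256, 32

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head self-attention over the time axis of one window."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (T, T) attention weights
    return scores @ v                                  # (T, D_MODEL)

# Random matrices stand in for learned parameters.
w_in = rng.normal(size=(N_CHANNELS, D_MODEL)) * 0.1   # per-sample channel embedding
wq, wk, wv = (rng.normal(size=(D_MODEL, D_MODEL)) * 0.1 for _ in range(3))
w_cls = rng.normal(size=(D_MODEL, 4)) * 0.1           # 4 texture classes
w_reg = rng.normal(size=(D_MODEL, 3)) * 0.1           # (x, y) contact position + speed

window = rng.normal(size=(T, N_CHANNELS))             # one raw time-series window
h = self_attention(window @ w_in, wq, wk, wv)         # mix information across time
pooled = h.mean(axis=0)                               # pool over the time axis

texture_probs = softmax(pooled @ w_cls)               # classification head
position_and_speed = pooled @ w_reg                   # regression head

print(texture_probs.shape, position_and_speed.shape)  # (4,) (3,)
```

The point of operating on whole windows is visible in the attention step: each time sample attends to every other sample, so the model can exploit the high sampling rate of the microphones instead of collapsing each window to a single snapshot before inference.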
