
arXiv:2402.02627

Stability Analysis of Various Symbolic Rule Extraction Methods from Recurrent Neural Network

4 February 2024
Neisarg Dave
Daniel Kifer
C. Lee Giles
A. Mali
Abstract

This paper analyzes two competing rule extraction methodologies: quantization and equivalence query. We trained 3600 RNN models, extracting 18000 DFA with quantization approaches (k-means and SOM) and 3600 DFA by the equivalence query (L*) method across 10 initialization seeds. We sampled the datasets from 7 Tomita and 4 Dyck grammars and trained them on 4 RNN cells: LSTM, GRU, O2RNN, and MIRNN. The observations from our experiments establish the superior performance of O2RNN and quantization-based rule extraction over the others. L*, primarily proposed for regular grammars, performs similarly to quantization methods for Tomita languages when the neural networks are perfectly trained. However, for partially trained RNNs, L* shows instability in the number of states in the DFA; e.g., for the Tomita 5 and Tomita 6 languages, L* produced more than 100 states. In contrast, quantization methods yield rules whose number of states is very close to the ground-truth DFA. Among RNN cells, O2RNN consistently produces stable DFA compared to the other cells. For Dyck languages, we observe that although GRU outperforms the other RNNs in network performance, the DFA extracted from O2RNN have higher performance and better stability. Stability is computed as the standard deviation of accuracy on test sets over networks trained across 10 seeds. On Dyck languages, quantization methods outperformed L* with better stability in both accuracy and the number of states. L* often showed instability in accuracy on the order of 16%-22% for GRU and MIRNN, while the deviation for quantization methods varied within 5%-15%. In many instances with LSTM and GRU, DFA extracted by L* even failed to beat chance accuracy (50%), while those extracted by the quantization method had standard deviations in the 7%-17% range. For O2RNN, both rule extraction methods had deviations in the 0.5%-3% range.
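The quantization-based extraction the abstract refers to can be illustrated with a short sketch: record the RNN's hidden states, cluster them (k-means here; SOM is the alternative quantizer), treat each cluster as a DFA state, and take majority-vote transitions over observed (state, symbol) pairs. This is a minimal illustrative sketch with fabricated data, not the authors' implementation; the function names, parameters, and toy inputs are assumptions, and accepting-state assignment (taken from the RNN's output in practice) is omitted.

```python
# Minimal sketch of quantization-based DFA extraction from RNN hidden states
# (illustrative only; not the authors' code).
import numpy as np
from collections import Counter, defaultdict
from sklearn.cluster import KMeans

def extract_dfa(hidden_traces, symbol_traces, n_states):
    """hidden_traces[i][t] is the hidden vector after reading t symbols
    (index 0 is the initial state), so len(hidden) == len(symbols) + 1.
    Returns (start_state, transition_table) of the quantized DFA."""
    # 1) Quantize: cluster all hidden vectors; each cluster id is a DFA state.
    all_h = np.vstack([np.asarray(h) for h in hidden_traces])
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(all_h)

    # 2) Count observed transitions (state, symbol) -> next state.
    votes = defaultdict(Counter)
    starts = Counter()
    for h_seq, s_seq in zip(hidden_traces, symbol_traces):
        states = km.predict(np.asarray(h_seq))
        starts[states[0]] += 1
        for t, sym in enumerate(s_seq):
            votes[(states[t], sym)][states[t + 1]] += 1

    # 3) Majority vote gives the deterministic transition function.
    delta = {key: ctr.most_common(1)[0][0] for key, ctr in votes.items()}
    return starts.most_common(1)[0][0], delta

# Toy usage with fabricated hidden states; a real run would record them
# from a trained LSTM/GRU/O2RNN/MIRNN on Tomita or Dyck strings.
rng = np.random.default_rng(0)
hidden = [rng.normal(size=(6, 8)) for _ in range(20)]          # 5 symbols -> 6 states
symbols = [list(rng.choice(["0", "1"], size=5)) for _ in range(20)]
q0, delta = extract_dfa(hidden, symbols, n_states=4)
print("start state:", q0)
print("transitions:", delta)
# The paper's stability metric would then be the standard deviation of the
# extracted DFA's test accuracy across the 10 training seeds, e.g. np.std(accs).
```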
