This paper introduces Generalized Attention Flow (GAF), a novel feature attribution method for Transformer-based models that addresses the limitations of current approaches. By extending Attention Flow and replacing attention weights with a generalized Information Tensor, GAF combines attention weights, their gradients, the maximum flow problem, and the barrier method to improve the quality of feature attributions. The proposed method satisfies key theoretical properties and mitigates the shortcomings of prior techniques that rely solely on simple aggregation of attention weights. Comprehensive benchmarking on sequence classification tasks demonstrates that a specific variant of GAF consistently outperforms state-of-the-art feature attribution methods in most evaluation settings, providing a more reliable interpretation of Transformer model outputs.
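The core idea of casting attention-based attribution as a maximum flow problem can be sketched as follows. This is a hedged illustration, not the paper's implementation: it builds a layered graph whose edge capacities come from per-layer attention matrices (optionally combined with their gradients as a stand-in for the paper's Information Tensor) and scores each input token by the maximum flow it can deliver to the output layer, using `networkx` rather than the barrier method the paper employs. All names (`attention_flow_scores`, the `grads` option) are illustrative assumptions.

```python
import numpy as np
import networkx as nx  # third-party; illustrative choice of max-flow solver


def attention_flow_scores(attentions, grads=None):
    """Sketch of attention flow via layered max flow.

    attentions: list of (n, n) arrays, one per layer; row i, col j is the
        attention of position i (upper layer) on position j (lower layer).
    grads: optional matching list of gradient arrays; if given, capacities
        use |A * dA| as a rough stand-in for a gradient-weighted tensor.
    Returns an array of per-token attribution scores.
    """
    n = attentions[0].shape[0]
    num_layers = len(attentions)
    G = nx.DiGraph()
    for layer, A in enumerate(attentions):
        C = np.abs(A * grads[layer]) if grads is not None else A
        for i in range(n):
            for j in range(n):
                # Edge from position j in `layer` to position i in `layer+1`,
                # with capacity given by the (gradient-weighted) attention.
                G.add_edge((layer, j), (layer + 1, i), capacity=float(C[i, j]))
    # Collect all top-layer positions into one sink; edges without a
    # `capacity` attribute are treated by networkx as unbounded.
    for i in range(n):
        G.add_edge((num_layers, i), "sink")
    # Score input token j by the max flow it can push to the top layer.
    scores = [nx.maximum_flow(G, (0, j), "sink")[0] for j in range(n)]
    return np.array(scores)
```

With row-stochastic attention matrices (each row sums to 1), every input token can route at most one unit of flow upward, so the scores are bounded by 1; gradient weighting breaks this uniformity and differentiates tokens by their effect on the output.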
@article{azarkhalili2025_2502.15765,
  title   = {Generalized Attention Flow: Feature Attribution for Transformer Models via Maximum Flow},
  author  = {Behrooz Azarkhalili and Maxwell Libbrecht},
  journal = {arXiv preprint arXiv:2502.15765},
  year    = {2025}
}