On the impact of measure pre-conditionings on general parametric ML models and transfer learning via domain adaptation

Abstract
We study a new technique for understanding the convergence of learning agents under small modifications of the data. We show that such convergence can be understood via an analogue of Fatou's lemma, which yields Γ-convergence. We demonstrate its relevance and applications to general machine learning tasks and to transfer learning via domain adaptation.