Tree-based Focused Web Crawling with Reinforcement Learning

A focused crawler aims to discover as many web pages and web sites relevant to a target topic as possible, while avoiding irrelevant ones. Reinforcement Learning (RL) is a promising direction for optimizing focused crawling, because RL can naturally optimize the long-term profit of discovering relevant web locations through its reward signal. In this paper, we propose TRES, a novel RL-empowered framework for focused crawling that aims to maximize both the number of relevant web pages (the harvest rate) and the number of relevant web sites (domains). We model the focused crawling problem as a novel Markov Decision Process (MDP), which the RL agent aims to solve by determining an optimal crawling strategy. To overcome the computational infeasibility of exhaustively searching for the best action at each time step, we propose Tree-Frontier, a provably efficient tree-based sampling algorithm that adaptively discretizes the large state and action spaces and evaluates only a few representative actions. Experimentally, using online real-world data, we show that TRES significantly outperforms and Pareto-dominates state-of-the-art methods in terms of harvest rate and the number of retrieved relevant domains, while provably reducing by orders of magnitude the number of URLs that need to be evaluated at each crawling step.
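The core idea behind Tree-Frontier, evaluating only a few representative frontier URLs per step instead of scoring the whole queue, can be sketched roughly as below. This is an illustrative toy, not the paper's algorithm: the fixed-width bucketing, the `rough_score` feature, and the `BucketedFrontier` class are all assumptions standing in for the adaptive tree-based discretization described in the abstract.

```python
import heapq
from collections import defaultdict

class BucketedFrontier:
    """Toy frontier that discretizes queued URLs into coarse buckets and
    runs the expensive scorer on only one representative per bucket,
    rather than on every queued URL at each crawling step."""

    def __init__(self, n_bins=10):
        self.n_bins = n_bins
        # bin index -> max-heap of (-rough_score, url)
        self.buckets = defaultdict(list)

    def push(self, url, rough_score):
        # rough_score in [0, 1): a cheap relevance estimate (assumed given,
        # e.g. from anchor text or the parent page's relevance).
        b = min(int(rough_score * self.n_bins), self.n_bins - 1)
        heapq.heappush(self.buckets[b], (-rough_score, url))

    def pop_best(self, evaluate):
        # Score only the top representative of each non-empty bucket with
        # the expensive evaluator, then pop from the winning bucket.
        best_bin, best_val = None, float("-inf")
        for b, heap in self.buckets.items():
            if not heap:
                continue
            _, rep_url = heap[0]
            val = evaluate(rep_url)
            if val > best_val:
                best_bin, best_val = b, val
        if best_bin is None:
            return None
        _, url = heapq.heappop(self.buckets[best_bin])
        return url
```

With `B` buckets, each step costs `B` expensive evaluations instead of one per queued URL, which is the flavor of the orders-of-magnitude reduction the abstract claims; TRES achieves this adaptively with a decision tree over state-action features rather than fixed bins.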
@article{kontogiannis2025_2112.07620,
  title   = {Tree-based Focused Web Crawling with Reinforcement Learning},
  author  = {Andreas Kontogiannis and Dimitrios Kelesis and Vasilis Pollatos and George Giannakopoulos and Georgios Paliouras},
  journal = {arXiv preprint arXiv:2112.07620},
  year    = {2025}
}