Teaching and compressing for low VC-dimension

In this work we study the quantitative relation between VC-dimension and two other basic parameters related to learning and teaching, namely the quality of sample compression schemes and of teaching sets for classes of low VC-dimension. Let C be a binary concept class of size m and VC-dimension d. Prior to this work, the best known upper bounds for both parameters were log(m), while the best lower bounds are linear in d. We present significantly better upper bounds on both, as follows. Set k = O(d 2^d log log |C|). We show that there always exists a concept c in C with a teaching set (i.e. a list of c-labeled examples uniquely identifying c in C) of size k. This problem was studied by Kuhlmann (1999). Our construction implies that the recursive teaching (RT) dimension of C is at most k as well. The RT-dimension was suggested by Zilles et al. and Doliwa et al. (2010). The same notion (under the name partial-ID width) was independently studied by Wigderson and Yehudayoff (2013). An upper bound on this parameter that depends only on d is known just for the very simple case d = 1, and is open even for d = 2. We also make small progress towards this seemingly modest goal. We further construct sample compression schemes of size k for C, with additional information of k log(k) bits. Roughly speaking, given any list of C-labelled examples of arbitrary length, we can retain only k labeled examples in a way that allows one to recover the labels of all the other examples in the list, using k log(k) additional bits of information. This problem was first suggested by Littlestone and Warmuth (1986).
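To make the notion of a teaching set concrete, here is a minimal brute-force sketch (not the paper's construction, which achieves the bound k via a recursive argument): for a finite concept class, a teaching set for a concept c is a list of labeled examples with which only c is consistent. The toy class of threshold concepts below is an assumption chosen for illustration.

```python
from itertools import combinations

def teaching_set(c, concepts, domain):
    """Smallest list of (example, label) pairs such that c is the unique
    concept in `concepts` agreeing with c on those examples (brute force)."""
    for size in range(len(domain) + 1):
        for S in combinations(domain, size):
            consistent = [h for h in concepts
                          if all(h[x] == c[x] for x in S)]
            if consistent == [c]:
                return [(x, c[x]) for x in S]
    return None

# Toy class: threshold concepts on {0,1,2,3}, c_t(x) = 1 iff x >= t.
# This class has VC-dimension 1.
domain = list(range(4))
concepts = [{x: int(x >= t) for x in domain} for t in range(5)]

print(teaching_set(concepts[2], concepts, domain))  # -> [(1, 0), (2, 1)]
```

The two returned examples rule out every other threshold: the negative label at 1 eliminates all smaller thresholds, and the positive label at 2 eliminates all larger ones.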
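The idea of a sample compression scheme can likewise be illustrated on a toy class (again, this is not the paper's size-k construction). For threshold concepts on the integers, any nonempty consistent labeled sample can be compressed to a single labeled example from which every other label is recovered, with no additional information bits:

```python
# Sketch of a size-1 sample compression scheme for threshold concepts
# c_t(x) = 1 iff x >= t, assuming a nonempty sample consistent with some c_t.

def compress(sample):
    """Retain one (x, label) pair: the minimal positive example,
    or the maximal example if the sample is all-negative."""
    positives = [x for x, y in sample if y == 1]
    if positives:
        return (min(positives), 1)
    return (max(x for x, _ in sample), 0)

def reconstruct(kept):
    """Rebuild a hypothesis from the single retained pair."""
    x0, y0 = kept
    if y0 == 1:
        return lambda x: int(x >= x0)   # threshold at the kept positive
    return lambda x: int(x > x0)        # everything kept-or-below is negative

sample = [(0, 0), (1, 0), (3, 1), (5, 1)]   # consistent with thresholds 2 or 3
h = reconstruct(compress(sample))
print(all(h(x) == y for x, y in sample))    # -> True: all labels recovered
```

The paper's result plays the same game for an arbitrary class of VC-dimension d: retain only k examples, at the cost of k log(k) extra bits, rather than exploiting the special one-dimensional structure used here.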