Water from Two Rocks: Maximizing the Mutual Information

We build a natural connection between the learning problem, co-training, and the peer-prediction-style mechanism design problem, and address them simultaneously using the same information-theoretic approach.

Learning: Based on two families of information measures, mutual information and proper scoring rules, we reduce the learning problem to two families of optimization problems, and show that the maximizer of each optimization problem corresponds to the Bayesian posterior predictor, i.e., the predictor that maps any input information to its Bayesian posterior forecast for the ground truth. To the best of our knowledge, this is the first optimization goal in the co-training literature that guarantees, without any additional assumption, that the maximizer corresponds to the Bayesian posterior predictor.

Mechanism design: Assuming the agents' information is independent conditional on the ground truth, we design mechanisms that elicit high-quality forecasts without verification and reward agents immediately. In the single-task setting, we propose a forecast elicitation mechanism in which truth-telling is a strict equilibrium; in the multi-task setting, we propose a family of forecast elicitation mechanisms in which truth-telling is a strict equilibrium and pays strictly better than any other equilibrium.
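The claim that the Bayesian posterior predictor is the maximizer can be checked numerically with the log scoring rule (one standard proper scoring rule): over a toy joint distribution with hypothetical numbers, the predictor maximizing the expected log score of its forecast is exactly the Bayesian posterior. This is a minimal sketch, not the paper's full optimization framework.

```python
import numpy as np

# Hypothetical toy joint distribution P(X, Y) over binary X and binary ground truth Y.
joint = np.array([[0.30, 0.10],   # P(X=0, Y=0), P(X=0, Y=1)
                  [0.15, 0.45]])  # P(X=1, Y=0), P(X=1, Y=1)

p_x = joint.sum(axis=1)              # marginal P(X)
posterior = joint / p_x[:, None]     # Bayesian posterior P(Y | X)

def expected_log_score(q):
    """E[log q(Y | X)] under the joint, for a predictor q[x, y]."""
    return float(np.sum(joint * np.log(q)))

# Grid-search over predictors parameterized by (q(Y=1|X=0), q(Y=1|X=1))
# and check that the maximizer matches the Bayesian posterior.
grid = np.linspace(0.01, 0.99, 99)
best = max(
    ((a, b) for a in grid for b in grid),
    key=lambda ab: expected_log_score(
        np.array([[1 - ab[0], ab[0]], [1 - ab[1], ab[1]]])
    ),
)
print(best)             # maximizer, ≈ (0.25, 0.75)
print(posterior[:, 1])  # Bayesian posterior P(Y=1|X), same values
```

The grid maximizer coincides with the true posterior in each coordinate, which is the defining property of a proper scoring rule applied pointwise to each input.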
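To see why conditional independence enables elicitation without verification, here is a sketch of the classic peer-prediction flavor of such mechanisms (not the paper's specific constructions): an agent's reported forecast of the ground truth induces, through a known common prior, a prediction of a peer's signal, and the mechanism log-scores that prediction against the peer's realized signal. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical common prior: binary ground truth Y, and two agents whose
# signals X1, X2 are i.i.d. conditional on Y (the conditional-independence
# assumption from the abstract).
prior_y1 = 0.5
p_sig1_given_y = {0: 0.2, 1: 0.8}   # P(X_i = 1 | Y = y), same for both agents

def posterior_y1(x1):
    """Bayesian posterior P(Y=1 | X1 = x1) under the common prior."""
    num = (p_sig1_given_y[1] if x1 else 1 - p_sig1_given_y[1]) * prior_y1
    den = num + (p_sig1_given_y[0] if x1 else 1 - p_sig1_given_y[0]) * (1 - prior_y1)
    return num / den

def expected_payment(report, x1):
    """Expected payment for reporting `report` = q(Y=1), given own signal x1.
    The report induces a prediction of the peer's signal,
    q(X2=1) = sum_y P(X2=1|y) q(y), which is log-scored against X2."""
    q_x2_1 = report * p_sig1_given_y[1] + (1 - report) * p_sig1_given_y[0]
    # True distribution of the peer's signal given the agent's own signal:
    p_y1 = posterior_y1(x1)
    p_x2_1 = p_y1 * p_sig1_given_y[1] + (1 - p_y1) * p_sig1_given_y[0]
    return p_x2_1 * np.log(q_x2_1) + (1 - p_x2_1) * np.log(1 - q_x2_1)

# The expected payment is maximized by reporting the true posterior:
grid = np.linspace(0.01, 0.99, 99)
best = max(grid, key=lambda q: expected_payment(q, x1=1))
print(best, posterior_y1(1))   # best report ≈ true posterior 0.8
```

No access to the realized ground truth is needed: the log score is proper, and conditional independence makes the truthful posterior the unique maximizer of the induced prediction's expected score, which is the sense in which truth-telling is a strict equilibrium.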