In this paper, multi-task learning is studied within the framework of regularization theory and kernel methods. It is shown that, for a suitable class of mixed effect kernels, multi-task learning from distributed datasets can be solved by an algorithm with a client-server structure, in which each client is associated with a task and holds its own individual database of examples. The role of the server is to collect examples from the clients in real time and to encode the information in a common database accessible to all clients. Each client can exploit the encoded information in the common database to compute its own estimated function, but cannot access the private data of other clients, thus preserving privacy. The new algorithm is illustrated on a simulated music recommendation problem.
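As a rough illustration of the kind of kernel the abstract refers to, a mixed effect kernel typically combines a component shared by all tasks with a task-specific component that is active only when two examples belong to the same task. The sketch below assumes this standard convex-combination form with a Gaussian base kernel; the function names, the mixing parameter `lam`, and the RBF choice are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def rbf(x, x2, gamma=1.0):
    """Gaussian (RBF) base kernel between two input vectors."""
    x, x2 = np.asarray(x, dtype=float), np.asarray(x2, dtype=float)
    return np.exp(-gamma * np.sum((x - x2) ** 2))

def mixed_effect_kernel(x, task_i, x2, task_j, lam=0.5, gamma=1.0):
    """Illustrative mixed effect kernel: a convex combination of a kernel
    shared across all tasks and a task-specific kernel that contributes
    only when both examples come from the same task (client)."""
    common = rbf(x, x2, gamma)                                    # shared component
    individual = rbf(x, x2, gamma) if task_i == task_j else 0.0   # task-specific component
    return lam * common + (1.0 - lam) * individual
```

In this form, the shared term is what the server can meaningfully aggregate into a common database, while the task-specific term only involves each client's own examples, which is consistent with the privacy-preserving client-server split described above.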