Description: |
Owing to their fast convergence, second-order Newton-type learning methods have recently received attention in the federated learning (FL) setting. However, existing solutions rely on communicating the Hessian matrices from the devices to the parameter server at every iteration, incurring a prohibitive per-round communication cost and calling for novel communication-efficient Newton-type learning methods. In this article, we propose a novel second-order Newton-type method that, similarly to its first-order counterparts, requires every device to share only a model-sized vector at each iteration while hiding the gradient and Hessian information. As a result, the proposed approach is significantly more communication-efficient and privacy-preserving. Furthermore, by leveraging the over-the-air aggregation principle, our method strengthens these privacy guarantees and achieves even higher communication-efficiency gains. In particular, we formulate the problem of learning the inverse Hessian-gradient product as a quadratic problem that is solved in a distributed way. The framework alternates between updating the inverse Hessian-gradient product using a few alternating direction method of multipliers (ADMM) steps, and updating the global model using Newton's method. Numerical results show that the proposed approach is more communication-efficient and scales well under noisy channels across different scenarios and multiple datasets.
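To make the alternating structure concrete, the following is a minimal, illustrative Python sketch rather than the exact algorithm from the article: it approximates the Newton direction of a federated logistic-regression objective with a few consensus-ADMM steps in which devices exchange only model-sized vectors, then applies a global Newton-type update. All names, dimensions, and hyperparameters (n_devices, rho, admm_steps, lam, and so on) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_devices, n_samples, dim = 5, 200, 10

# Synthetic local datasets, one per device (assumed setup, for illustration only).
w_true = rng.normal(size=dim)
data = []
for _ in range(n_devices):
    X = rng.normal(size=(n_samples, dim))
    y = (rng.random(n_samples) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
    data.append((X, y))

def local_grad_hess(w, X, y, lam=1e-2):
    """Gradient and Hessian of the local regularized logistic loss."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    g = X.T @ (p - y) / len(y) + lam * w
    H = (X.T * (p * (1.0 - p))) @ X / len(y) + lam * np.eye(dim)
    return g, H

w = np.zeros(dim)
rho, admm_steps, newton_steps = 1.0, 10, 5   # illustrative hyperparameters

for _ in range(newton_steps):
    local = [local_grad_hess(w, X, y) for X, y in data]

    # Inner loop: consensus ADMM on min_d sum_i (0.5 d^T H_i d - g_i^T d),
    # whose minimizer is the Newton direction (sum_i H_i)^{-1} (sum_i g_i).
    z = np.zeros(dim)                                  # shared direction estimate
    d = [np.zeros(dim) for _ in range(n_devices)]      # local primal variables
    u = [np.zeros(dim) for _ in range(n_devices)]      # scaled dual variables
    for _ in range(admm_steps):
        for i, (g, H) in enumerate(local):
            # Each device solves a small regularized quadratic locally.
            d[i] = np.linalg.solve(H + rho * np.eye(dim), g + rho * (z - u[i]))
        # Only model-sized vectors d_i + u_i are averaged; this aggregation step
        # is where over-the-air computation would be applied in the FL setting.
        z = np.mean([d[i] + u[i] for i in range(n_devices)], axis=0)
        for i in range(n_devices):
            u[i] += d[i] - z

    # Outer loop: global Newton-type update with the ADMM-estimated direction.
    w = w - z

grads = [local_grad_hess(w, X, y)[0] for X, y in data]
print("aggregated gradient norm after training:", np.linalg.norm(np.sum(grads, axis=0)))
```

In this sketch the server never sees local gradients or Hessians, only the averaged model-sized vectors from the ADMM inner loop, which mirrors the communication pattern described above.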