Enabling privacy-preserving AI training on everyday devices


In a groundbreaking development from researchers at the Massachusetts Institute of Technology (MIT), a new method has emerged that accelerates privacy-preserving artificial intelligence (AI) training by approximately 81%. This innovation holds great promise for a range of resource-constrained edge devices, such as sensors and smartwatches, enabling them to deploy more accurate AI models while ensuring user data remains secure.

The method enhances the efficiency of federated learning, a technique that allows a network of connected devices to collaboratively train a shared AI model. In this setup, an AI model is sent from a central server to various connected devices, which then utilize their local data to train the model and return updates to the server. The pivotal advantage here is that raw data stays on the individual devices, thereby preserving user privacy.
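The round-trip described above can be sketched in a few lines. This is a minimal illustration of plain federated averaging (FedAvg) with a toy linear model, not the MIT team's actual method or code; the function names, the learning rate, and the least-squares setup are all assumptions made for the example.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """Train a copy of the global model on one device's private data."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w                                 # only weights leave the device

def federated_round(global_weights, devices):
    """Server step: collect local updates and average them by data size."""
    updates, sizes = [], []
    for X, y in devices:                     # raw (X, y) never reaches the server
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy setup: three devices, each holding a private shard of data from y = 2x
rng = np.random.default_rng(0)
devices = []
for n in (20, 50, 80):                       # heterogeneous dataset sizes
    X = rng.normal(size=(n, 1))
    devices.append((X, X @ np.array([2.0])))

w = np.zeros(1)
for _ in range(30):                          # repeated rounds refine the shared model
    w = federated_round(w, devices)
```

After a few dozen rounds the shared weight approaches the true value of 2.0, even though the server only ever sees model weights, never any device's raw data.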

However, limitations arise due to varying computational capacities, memory constraints, and network connectivity of the devices involved. Many edge devices—ranging from smartwatches to wireless sensors—are often unable to manage the demands of storing, training, and transmitting data back to the server in a timely manner, leading to performance inefficiencies.

The researchers at MIT addressed these challenges by developing a method that effectively handles a heterogeneous network of wireless devices, each with its unique limitations. By doing so, they significantly reduce the time lag often experienced during the training process. This advancement could lead to wider adoption of AI models in critical fields, including healthcare and finance, where stringent security and privacy protocols are paramount.

“This work is about bringing AI to small devices where it is not currently possible to run these kinds of powerful models,” says Irene Tenison, an electrical engineering and computer science graduate student and the paper’s lead author. “We carry these devices around with us in our daily lives. We need AI to be able to run on these devices, not just on giant servers and GPUs, and this work is an important step toward enabling that.”

Other contributors to this significant work include Anna Murphy, a machine-learning engineer at Lincoln Laboratory; Charles Beauville, a visiting student from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland; and Lalana Kagal, a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). The findings of this research will be presented at the upcoming IEEE International Joint Conference on Neural Networks.

Many existing federated learning methodologies operate under the assumption that all participating devices in the network possess sufficient memory to accommodate the full AI model, coupled with reliable connectivity to promptly relay updates back to the central server. Unfortunately, these assumptions fail to hold for the diverse devices found in practical applications.

In heterogeneous device networks composed of mobile phones, smartwatches, and wireless sensors, it is common to encounter limitations in memory and computational power, alongside potential connectivity interruptions. The conventional training process typically involves the central server awaiting updates from all connected devices before averaging them to finalize the training round. Waiting on the slowest devices introduces notable lag and inefficiencies, which can significantly slow down the training procedure.

This MIT-led innovation not only promises to reduce lag time but also aims to make deploying advanced AI functionalities more practical and efficient, ultimately driving innovation in AI applications across various sectors.

As AI continues to evolve, the importance of maintaining user privacy while utilizing sophisticated machine learning techniques grows ever more vital. With this new approach, MIT researchers have taken a crucial step forward in enabling powerful AI capabilities directly on everyday devices, blending ease of use with stringent privacy standards.
