Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data

Abstract

Deep learning, in which a machine learns to perform a task directly from data, has great potential as a diagnostic tool in medicine. However, privacy concerns and a lack of diverse training data have kept many institutions from using it. Federated learning is a newer approach that allows institutions to collaborate on a shared model without sharing patient data, and it is almost as effective as institutions pooling their data directly. This approach could allow much larger and more diverse amounts of data to be analyzed.

Aims

The authors aim to evaluate how effective federated learning can be in medicine by comparing it to pre-existing collaborative approaches.

Introduction

Many machine learning models apply only to the individual institution that built them. Each institution has its own biases, such as its particular patient demographics, so single-institution models do not generalize well. Collaboration among institutions can improve these models.


This paper compares four main collaborative systems. The first is traditional sharing, known as collaborative data sharing (CDS): participating institutions send their patient data to one central institution, which pools all of the data to train a final model. As the number of institutions grows, however, privacy concerns and other issues mount.
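
A minimal sketch of the CDS setup, under toy assumptions (synthetic institutional datasets and a simple linear model standing in for a deep network; the helper names here are illustrative, not from the paper):

```python
# A toy stand-in for collaborative data sharing (CDS): every institution's
# data are pooled at one central site, and a single model is trained there.
import numpy as np

def train(w, X, y, lr=0.1, epochs=50):
    """Train a toy linear model by gradient descent on squared error."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
# Three "institutions", each holding its own local dataset.
local_data = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]

# CDS: pool everything centrally, then train once on the combined set.
X_pool = np.vstack([X for X, _ in local_data])
y_pool = np.concatenate([y for _, y in local_data])
w_cds = train(np.zeros(4), X_pool, y_pool)
```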


A second model is federated learning (FL). Every institution receives a copy of a shared model and trains it on its own patient data. Each institution then sends only its updated model (not its data) to a central server, which combines the updates into a single aggregate model. The server sends this aggregate model back to the institutions, and all of them train it again in parallel. This cycle can be repeated continuously to keep the model up to date without any institution sharing its data. The researchers asked whether withholding the raw data in federated learning would make the model less accurate; compared to CDS, they found the benefits outweigh the costs.
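
To make the round structure concrete, here is a minimal sketch of federated averaging under the same toy assumptions as above: a linear model trained by gradient descent, synthetic local datasets, and aggregation by averaging local models weighted by dataset size. The paper's actual network architecture and aggregation details are not reproduced here.

```python
# A toy sketch of one federated-learning loop: each round, the server
# broadcasts the shared weights, every institution trains locally in
# parallel, and the server averages the returned models by dataset size.
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    """One institution refines the shared weights on its own data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
institutions = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]

w_global = np.zeros(4)
for _ in range(10):  # repeated rounds keep the shared model up to date
    # Each institution trains the broadcast model on its private data.
    local_models = [local_train(w_global, X, y) for X, y in institutions]
    # The server aggregates the models; raw patient data never move.
    sizes = [len(y) for _, y in institutions]
    w_global = np.average(local_models, axis=0, weights=sizes)
```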


A third model is institutional incremental learning (IIL). An institution receives a preliminary model, updates it using its own data, and passes the new model to the next institution, which updates it and passes it on in turn. This continues until a final model is created. The fourth model, cyclic institutional incremental learning (CIIL), is the same as IIL except that the last institution passes the final model back to the first institution, which updates it and passes it on again; the cycle is repeated a number of times to keep the model up to date (Figure 1). Past studies demonstrated that CIIL is more effective than IIL, because repeating the cycle reduces how much information the machine forgets. However, forgetting still occurs in CIIL, especially with a large number of institutions: the final model more closely resembles the institutions that most recently ran their data through it than an equal combination of all the institutions.
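
The relay structure of IIL and CIIL can be sketched in the same toy setting; the only change between the two is whether the full pass is repeated. Again, the names and model are illustrative assumptions, not the paper's pipeline.

```python
# A toy sketch of the IIL/CIIL relay: the model is handed from one
# institution to the next, and CIIL simply repeats the full relay.
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
institutions = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]

# IIL: a single pass; each institution updates the model and hands it on.
w_iil = np.zeros(4)
for X, y in institutions:
    w_iil = local_train(w_iil, X, y)

# CIIL: the same relay repeated for several cycles, so the model revisits
# every institution; with long local training, the most recently visited
# institutions still dominate the final weights (the "forgetting" effect).
w_ciil = np.zeros(4)
for _ in range(10):
    for X, y in institutions:
        w_ciil = local_train(w_ciil, X, y)
```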

Results

In general, the larger and more diverse an institution's data set, the more accurate the model (Figure 2). Furthermore, collaboration among institutions leads to better models than a model that relies on only a single institution (Figure 3). The advantage is especially apparent in the researchers' Leave-One-(institution)-Out (LOO) test, in which one of the ten institutions is excluded when the model is trained and its data are then used for evaluation. In the LOO test, the collaborative approaches produced more generalizable models than single-institution models did (Table 1).
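
A minimal sketch of the LOO protocol, under toy assumptions: synthetic institutional datasets, one round of simple federated averaging standing in for the collaborative model, and mean squared error standing in for the paper's accuracy measure.

```python
# A toy sketch of the Leave-One-(institution)-Out test: train a
# collaborative model on nine institutions, then measure how well it
# generalizes to the tenth, which it never saw during training.
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
institutions = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(10)]

for held_out in range(len(institutions)):
    kept = [d for i, d in enumerate(institutions) if i != held_out]
    # Train collaboratively on the nine remaining institutions.
    w = np.mean([local_train(np.zeros(4), X, y) for X, y in kept], axis=0)
    # Evaluate generalization on the held-out institution.
    X_test, y_test = institutions[held_out]
    mse = np.mean((X_test @ w - y_test) ** 2)
    print(f"held-out institution {held_out}: MSE = {mse:.3f}")
```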


Federated learning models learn almost as fast as collaborative data sharing, while CIIL and IIL lag behind in how quickly they can produce a generalizable model. The researchers measured this in epochs, where an epoch is one cycle the method goes through on its way to a final model: in FL, an epoch is each round in which all the institutions update the model at once; in CIIL, an epoch is one full cycle through the institutions (Figure 4). CIIL and IIL are also unstable in their accuracy, with jagged learning curves, because their model depends mostly on the last institution that updated it. At any point in the relay, a particular model is available to only two institutions: the one that just updated it and the one it is passed to. This is inefficient, as the model in circulation never fully encompasses all institutions. Federated learning is therefore nearly as effective as institutions sharing data outright, and more effective than CIIL and IIL.

Discussion

Federated learning shows that it is possible to create accurate models without sharing patient data. FL holds an advantage over collaborative data sharing because patient data are never exchanged; this matters because regulations such as HIPAA protect patient privacy, and under FL the data remain at the original institution. With privacy protected, more institutions will be able to collaborate, which will help build more generalizable models.


One concern is that FL's local models may be biased, since each local update is trained on data from only one institution, whereas a CDS model is trained on data from multiple institutions at once. But FL's aggregation step combines the local models, which removes this bias.


More research is needed on how FL can be used for data other than radiographic images, such as clinical notes or genomic data. Another limitation and risk is malicious actors: it may be possible to reconstruct some of the underlying data by working backwards from the model, and attackers may also be able to alter the data or the training process, violating patient privacy. Further studies are needed on how to prevent such attacks.

Methods

The researchers obtained data from the International Brain Tumor Segmentation (BraTS) challenge, which consists of brain tumor MRI data from ten institutions. The task is to differentiate brain tumors from normal tissue in MRI scans. The researchers trained models for this task under each of the four collaboration approaches described in the introduction, then tested each model against a predetermined validation data set to measure how accurate and generalizable its predictions were on data it had not seen during training.
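
The paper's exact evaluation pipeline is not described here, but a common way to score a brain tumor segmentation against validation labels is an overlap measure such as the Dice coefficient. A minimal sketch under that assumption, using synthetic binary masks in place of real BraTS annotations:

```python
# A toy sketch of scoring a tumor segmentation against validation labels.
# The Dice coefficient (overlap between predicted and reference masks) is
# assumed as the accuracy measure; the masks below are synthetic, not
# real BraTS MRI data.
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A and B."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    return (2 * overlap + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(0)
truth = rng.random((64, 64)) > 0.7                         # reference mask
pred = np.logical_xor(truth, rng.random((64, 64)) > 0.95)  # noisy prediction
print(f"Dice score: {dice(pred, truth):.3f}")
```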

Conclusion

Federated learning allows institutions to use an accurate, generalizable model to analyze their data without sharing their own patient data, preventing privacy issues. All of the institutions contribute to the shared model, effectively creating a large, diverse data set. Federated learning may help advance many fields of medicine.