Shared Federated Learning Algorithm Based on Knowledge Distillation under Data Imbalance Scenario
Traditional federated learning (FL) algorithms carry security risks when handling highly sensitive and imbalanced data, which may lead to degraded model performance or disclosure of private data. To compensate for these shortcomings, this study introduced knowledge distillation (KD) and a local differential privacy (LDP) protection mechanism to optimize FL performance, ultimately constructing a privacy-preserving LDP-KD-FL algorithm. Experimental analysis showed that, in the communication-volume comparison, the gradient parameters of all algorithms rose as communication volume increased; however, compared with the FedAvg, CentLearn, and DistLearn algorithms, the total communication volume of the LDP-KD-FL algorithm grew more slowly, remaining below 40 KB at a gradient parameter of 5,000. In the server-runtime comparison, the LDP-KD-FL algorithm ran in 152.8 ms at a gradient parameter of 2,000. The data also showed that the training accuracy of the LDP-KD-FL algorithm exceeded 80% after 16 communication rounds. At a privacy budget of 0.5, the absolute error of the LDP-KD-FL algorithm was 0.12, which was 0.32%, 0.23%, and 0.29% lower than the absolute errors of the FedAvg, CentLearn, and DistLearn algorithms, respectively. In summary, the LDP-KD-FL algorithm has low communication overhead and a faster server response.
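The abstract does not give implementation details for LDP-KD-FL. As an illustrative sketch only, the core client-side steps such an algorithm typically combines are: clipping a local gradient to bound its sensitivity, perturbing it with Laplace noise under a privacy budget ε (the LDP part), and computing softened teacher probabilities as distillation targets (the KD part). All function names, the clipping bound, and the sensitivity estimate below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def clip_gradient(grad, clip_norm=1.0):
    # Clip the gradient's L2 norm to clip_norm so the later
    # noise scale can be tied to a bounded sensitivity.
    grad = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    return grad

def ldp_perturb(grad, epsilon, clip_norm=1.0, rng=None):
    """Perturb a clipped gradient with Laplace noise before upload.

    Uses the Laplace mechanism with scale b = sensitivity / epsilon,
    taking 2 * clip_norm as a rough worst-case sensitivity bound
    (an assumption of this sketch, not a claim about LDP-KD-FL).
    """
    rng = rng if rng is not None else np.random.default_rng()
    g = clip_gradient(grad, clip_norm)
    scale = 2.0 * clip_norm / epsilon
    return g + rng.laplace(0.0, scale, size=g.shape)

def distillation_targets(teacher_logits, temperature=2.0):
    # Temperature-softened teacher probabilities, the usual
    # soft labels a student model is trained against in KD.
    z = np.asarray(teacher_logits, dtype=float) / temperature
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

A smaller privacy budget ε yields a larger Laplace scale and hence noisier uploads, which is consistent with the abstract's observation that absolute error is evaluated against the privacy budget (0.12 at ε = 0.5 for LDP-KD-FL).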
