Traditional FL algorithms carry certain security risks when handling highly sensitive and unbalanced data, which may lead to degraded model performance or disclosure of private data. Therefore, to compensate for these shortcomings of traditional FL algorithms, this study introduced knowledge distillation (KD) and a local differential privacy (LDP) protection mechanism to optimize FL performance, ultimately constructing a privacy protection mechanism based on the LDP-KD-FL algorithm. Experimental analysis found that, in the comparison of communication volume, the gradient parameters of all algorithms trended upward as communication volume increased. However, compared with the FedAvg, CentLearn, and DistLearn algorithms, the total communication volume of the LDP-KD-FL algorithm grew more slowly; when the gradient parameter was 5000, its communication volume remained below 40 KB. In the comparison of server runtime, when the gradient parameter was 2000, the LDP-KD-FL algorithm ran in 152.8 ms. The data showed that after 16 communication rounds, the training accuracy of the LDP-KD-FL algorithm exceeded 80%. When the privacy budget was 0.5, the absolute error of the LDP-KD-FL algorithm was 0.12, which was 0.32%, 0.23%, and 0.29% lower than that of the FedAvg, CentLearn, and DistLearn algorithms, respectively. In summary, the LDP-KD-FL algorithm has low communication overhead and a faster server response.
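To make the privacy-budget results above concrete, the following is a minimal sketch of how an LDP mechanism might perturb a client's gradient update before it is sent to the server. The paper does not specify the exact mechanism, so the Laplace mechanism, the L1 clipping step, and the `clip_norm` parameter here are illustrative assumptions, not the authors' implementation; smaller privacy budgets (e.g. epsilon = 0.5) inject more noise and thus trade accuracy for privacy.

```python
import numpy as np

def ldp_perturb(gradient, epsilon, clip_norm=1.0):
    """Illustrative LDP perturbation of a client gradient (assumed mechanism).

    Clips the gradient to bound its L1 sensitivity, then adds Laplace noise
    with scale = sensitivity / epsilon, so smaller epsilon means more noise.
    """
    g = np.asarray(gradient, dtype=float)
    # Clip to bound the contribution of any single update.
    l1 = np.linalg.norm(g, ord=1)
    if l1 > clip_norm:
        g = g * (clip_norm / l1)
    # Laplace noise calibrated to the privacy budget epsilon.
    scale = 2.0 * clip_norm / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=g.shape)
    return g + noise
```

With a tight budget such as epsilon = 0.5, the noise scale here would be 4.0 per coordinate, which is why absolute error rises as the budget shrinks.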