A review of Deep Learning Privacy, Security and Defenses

Afrah Salman Dawood
Noor Kadhim Hadi

Abstract

Deep learning (DL) is a powerful tool in many fields and applications, but its growing importance has raised concerns about privacy, security, and defense. This research presents an overview of the key aspects and state-of-the-art techniques in DL privacy, security, and defense. A wide range of topics is covered, including private-data frameworks, different types of threats and attacks, and the most important defense techniques. We also discuss the challenges and limitations of each approach, as well as possible future research directions. This survey can serve as a comprehensive guide for researchers and policymakers interested in understanding these important topics associated with DL.


Article Details

How to Cite
Dawood, A. S., & Hadi, N. K. (2023). A review of Deep Learning Privacy, Security and Defenses. Technium: Romanian Journal of Applied Sciences and Technology, 12, 65–83. https://doi.org/10.47577/technium.v12i.9471
Section
Articles

