tgoop.com/biasvariance_ir/76
One of the threats facing neural networks is the adversarial attack, which undermines the use of deep learning in safety-critical applications. A great deal of work has been done in this area. Recently, a paper was published on this topic for auto-driving systems.
Deep learning-based auto-driving systems are vulnerable to adversarial example attacks, which may result in wrong decisions and accidents. An adversarial example can fool a well-trained neural network by adding barely imperceptible perturbations to clean data. In this paper, we explore the mechanism of adversarial examples and adversarial robustness from the perspective of statistical mechanics, and propose a statistical mechanics-based interpretation model of adversarial robustness. The state transition caused by adversarial training is formally constructed based on the theory of fluctuation-dissipation disequilibrium in statistical mechanics. In addition, we thoroughly study the effect of adversarial example attacks and of the training process on system robustness, including the influence of different training procedures on network robustness. Our work helps to understand and explain the adversarial example problem and to improve the robustness of deep learning-based auto-driving systems.
https://ieeexplore.ieee.org/abstract/document/9539019
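The attack mechanism the abstract describes — adding a small perturbation in the direction that increases the loss — is the idea behind the classic Fast Gradient Sign Method (FGSM). A minimal sketch on a toy linear classifier (the model, data, and step size eps below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    # FGSM: take a step of size eps in the sign of the loss gradient w.r.t. the input
    return x + eps * np.sign(grad)

# Toy linear classifier: score = w @ x, with label y in {-1, +1}
rng = np.random.default_rng(0)
w = rng.normal(size=20)
x = rng.normal(size=20)
y = 1.0 if w @ x > 0 else -1.0  # take the model's own prediction as the "true" label

# For the hinge-style loss max(0, -y * (w @ x)), the gradient w.r.t. x is -y * w
grad = -y * w
x_adv = fgsm_perturb(x, grad, eps=0.5)

print("clean margin:      ", y * (w @ x))      # positive by construction
print("adversarial margin:", y * (w @ x_adv))  # pushed toward (or past) the boundary
```

Even though each coordinate of `x` moves by at most `eps`, the signs are chosen to align with the gradient, so the margin drops sharply — the same "barely imperceptible yet effective" property the abstract attributes to adversarial examples in image space.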
#معرفی_مقاله #یادگیری_عمیق #adversarial
BY Bias Variance
