Bias Variance (@biasvariance_ir), Telegram post 76
One of the problems threatening neural networks is the adversarial attack, which undermines the use of deep learning in safety-critical applications. A great deal of work has been done in this area. Recently, a paper was published on this topic for auto-driving systems.

Deep learning-based auto-driving systems are vulnerable to adversarial example attacks, which may result in wrong decisions and accidents. An adversarial example can fool a well-trained neural network by adding barely imperceptible perturbations to clean data. In this paper, we explore the mechanism of adversarial examples and adversarial robustness from the perspective of statistical mechanics, and propose a statistical mechanics-based interpretation model of adversarial robustness. The state transition caused by adversarial training is formally constructed based on the theory of fluctuation-dissipation disequilibrium in statistical mechanics. In addition, we thoroughly study the effect of adversarial example attacks and of the training process on system robustness, including the influence of different training procedures on network robustness. Our work helps to understand and explain the adversarial examples problem and to improve the robustness of deep learning-based auto-driving systems.

https://ieeexplore.ieee.org/abstract/document/9539019
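For readers unfamiliar with the terms in the abstract, the sketch below illustrates the two ingredients it mentions: crafting an adversarial example by adding a small, loss-increasing perturbation (here the classic FGSM attack, not the paper's statistical-mechanics model), and one step of adversarial training on the perturbed batch. This is a minimal, hypothetical PyTorch example; the toy model, random data, the epsilon value, and the helper names fgsm_example / adversarial_training_step are placeholders for illustration only, not anything taken from the paper.

```python
# Minimal sketch of FGSM adversarial examples and basic adversarial training.
# All names and values here are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Return x perturbed by the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on adversarial examples (the basic adversarial-training recipe)."""
    model.eval()                      # craft the attack without updating batch-norm stats
    x_adv = fgsm_example(model, x, y, epsilon)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy classifier and random data, just to show the call pattern.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.rand(8, 3, 32, 32)      # batch of "images" in [0, 1]
    y = torch.randint(0, 10, (8,))
    print("adversarial-training loss:", adversarial_training_step(model, optimizer, x, y))
```

In practice, epsilon is chosen relative to the input range, and stronger iterative attacks such as PGD are usually preferred over single-step FGSM when training for robustness.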



#معرفی_مقاله #یادگیری_عمیق #adversarial

🌴 Site | 🌺 Channel | 🌳 Support


