"Deep" networks have practically become a synonym for neural networks. However, increasing the width of a network can have notable effects of its own. At the same time, the data distribution is one of the factors that plays a very important role in neural networks. A few days ago an important paper on this topic was released; its abstract follows:
Wide Neural Networks Forget Less Catastrophically
A growing body of research in continual learning is devoted to overcoming the “Catastrophic Forgetting” of neural networks by designing new algorithms that are more robust to the distribution shifts. While the recent progress in continual learning literature is encouraging, our understanding of what properties of neural networks contribute to catastrophic forgetting is still limited. To address this, instead of focusing on continual learning algorithms, in this work, we focus on the model itself and study the impact of “width” of the neural network architecture on catastrophic forgetting, and show that width has a surprisingly significant effect on forgetting. To explain this effect, we study the learning dynamics of the network from various perspectives such as gradient norm and sparsity, orthogonalization, and lazy training regime. We provide potential explanations that are consistent with the empirical results across different architectures and continual learning benchmarks.
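To make the setup concrete, here is a minimal sketch (not the paper's code) of how one could measure forgetting as a function of width: a small PyTorch MLP is trained on a synthetic task A, then on a second task B with permuted input features, and the drop in task-A accuracy is reported for several widths. The data, tasks, and hyperparameters below are illustrative assumptions, not the paper's benchmarks.

```python
# Sketch: forgetting vs. network width on two synthetic permuted-input tasks.
import torch
import torch.nn as nn

def make_task(perm, n=2000, d=64, n_classes=4, seed=0):
    # Synthetic classification task: random class prototypes plus noise,
    # with a per-task permutation of the input features.
    g = torch.Generator().manual_seed(seed)
    protos = torch.randn(n_classes, d, generator=g)
    y = torch.randint(0, n_classes, (n,), generator=g)
    x = protos[y] + 0.5 * torch.randn(n, d, generator=g)
    return x[:, perm], y

def train(model, x, y, epochs=50, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

d, n_classes = 64, 4
xa, ya = make_task(torch.randperm(d), seed=1)  # task A
xb, yb = make_task(torch.randperm(d), seed=2)  # task B (distribution shift)

for width in [16, 64, 256, 1024]:
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(d, width), nn.ReLU(),
                          nn.Linear(width, n_classes))
    train(model, xa, ya)                 # learn task A
    acc_a_before = accuracy(model, xa, ya)
    train(model, xb, yb)                 # then learn task B
    acc_a_after = accuracy(model, xa, ya)
    # Forgetting = drop in task-A accuracy after training on task B.
    print(f"width={width:5d}  taskA before={acc_a_before:.2f} "
          f"after={acc_a_after:.2f}  forgetting={acc_a_before - acc_a_after:.2f}")
```

In such a setup, the paper's claim would correspond to the forgetting gap shrinking as the width grows, all else being equal.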
Paper link
#paper_introduction #deep_learning