tgoop.com/awesomedeeplearning/153
The biggest mistake I see with deep learning practitioners new to the PyTorch library is forgetting and/or mixing up the following steps:
1) Zeroing out gradients from the previous step (opt.zero_grad())
2) Performing backpropagation (loss.backward())
3) Updating model parameters (opt.step())
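The three steps above fit into a training loop like the minimal sketch below (the toy model, data, and learning rate are made up for illustration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)                      # toy model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(8, 4)                        # fake batch of inputs
y = torch.randn(8, 1)                        # fake targets

losses = []
for _ in range(5):
    opt.zero_grad()                          # 1) clear gradients from the previous step
    loss = loss_fn(model(x), y)
    loss.backward()                          # 2) backpropagate to populate .grad
    opt.step()                               # 3) update parameters using the gradients
    losses.append(loss.item())
```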
Failure to perform these steps in this exact order is a surefire way to shoot yourself in the foot when using PyTorch, and worse, PyTorch doesn’t report an error if you mix up these steps, so you may not even know you shot yourself!
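For example, if you skip opt.zero_grad(), PyTorch silently accumulates gradients across iterations rather than raising an error, as this small sketch shows:

```python
import torch

# A single trainable parameter; its gradient for loss = 2*w is always 2.
w = torch.ones(1, requires_grad=True)

loss = (w * 2).sum()
loss.backward()
first = w.grad.clone()       # grad is now 2.0

loss = (w * 2).sum()
loss.backward()              # without zero_grad(), grads add up: now 4.0
```

The second backward pass doubles the stored gradient instead of replacing it, which corrupts every subsequent opt.step() without any warning.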
The PyTorch library is super powerful, but you'll need to get used to the fact that training a neural network with PyTorch is like taking off your bicycle's training wheels: there's no safety net to catch you if you mix up important steps (unlike Keras/TensorFlow, which allows you to encapsulate the entire training procedure in a single model.fit call).
That’s not to say that Keras/TensorFlow are “better” than PyTorch — it’s just a difference between the two deep learning libraries of which you need to be aware. Source: PyImageSearch
BY GenAi, Deep Learning and Computer Vision