Why do you have to rerun the model after getting a NaN response to your epochs/learning rate combo?

Not sure if I got this right.
During training, the model stores its coefficients (weights) inside itself. When you initialize or reinitialize the model, you start from random coefficients.
If you trained your model badly, so that it shows no signs of converging, it's better to restart from scratch. It saves time. In the NaN case, restarting is the only option: once the weights become NaN, every further gradient update keeps them NaN, so no learning rate can recover the model.
Of course, you can continue training a badly trained (but not NaN) model with a proper learning rate, but it will usually take much more time than starting from the beginning.
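A tiny sketch of the NaN point above, in plain Python with made-up numbers: gradient descent on the loss `w**2` with an absurdly large learning rate blows the weight up to infinity and then NaN, after which no learning rate helps; only reinitializing does.

```python
import math

def step(w, lr):
    # One gradient-descent step on loss(w) = w**2, whose gradient is 2*w.
    grad = 2.0 * w
    return w - lr * grad

# Learning rate far too large: the weight diverges geometrically,
# overflows to inf, and inf - inf yields NaN.
w = 1.0
for _ in range(100):
    w = step(w, lr=1e5)
print(math.isnan(w))   # True

# Lowering the learning rate afterwards cannot rescue it:
# any arithmetic involving NaN stays NaN.
for _ in range(100):
    w = step(w, lr=1e-3)
print(math.isnan(w))   # still True

# Reinitializing and retraining with a sane learning rate converges fine.
w = 1.0                # fresh init
for _ in range(100):
    w = step(w, lr=0.1)
print(abs(w) < 1e-6)   # True: heads toward the minimum at 0
```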
Courtesy: @worminhole