It is really important to have an actual understanding of the math behind most of deep learning. In many cases, you will be tempted to treat backpropagation as a black box: you'll just assume it works because the framework or software package you use handles it for you. But that sometimes leaves you unable to understand why your model might not be working. In theory, you can say it's abstracted away and you don't have to worry about it anymore. But in practice, during optimization, you might run into problems, and if you don't understand backpropagation, you won't know why you are having these problems. In addition to that, we want to prepare you to not just be a user of deep learning, but maybe even eventually do research in this field, and maybe think of, implement, and be very, very good at debugging completely new kinds of models. And you'll observe that depending on which software package you use in the future, not everything is supported in some of these frameworks. So if you want to create a completely new model that falls outside what a framework already provides, you will need to implement the forward and the backward propagation for a new sub-module that you might have invented. It's useful.
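To make the last point concrete, here is a minimal sketch of what implementing such a sub-module looks like. This is illustrative only, not any particular framework's API: a hand-written forward and backward pass for a sigmoid node, checked against a numerical gradient.

```python
import math

class Sigmoid:
    """A hypothetical sub-module with hand-written forward and backward passes."""

    def forward(self, x):
        # Cache the output; the backward pass reuses it.
        self.out = 1.0 / (1.0 + math.exp(-x))
        return self.out

    def backward(self, grad_out):
        # Local gradient of sigmoid: sigma(x) * (1 - sigma(x)),
        # chained with the upstream gradient grad_out.
        return grad_out * self.out * (1.0 - self.out)

# Sanity-check the analytic gradient against a numerical estimate,
# the standard way to debug a new backward pass.
x = 0.5
node = Sigmoid()
node.forward(x)
analytic = node.backward(1.0)

h = 1e-5
numeric = (Sigmoid().forward(x + h) - Sigmoid().forward(x - h)) / (2 * h)
assert abs(analytic - numeric) < 1e-6
```

The gradient check at the end is the key debugging habit: whenever you write a new backward pass, compare it against a centered finite-difference estimate before trusting it inside a larger model.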